Article

Frame Loss Effects on Visual Fatigue in Super Multi-View 3D Display Technology

Hongjin Fang, Yu Chen, Dongdong Teng, Jin Luo, Siying Wu, Jianming Zheng, Jiahui Wang, Zimin Chen and Lilin Liu
1 School of Electronics and Information Technology, Sun Yat-sen University, Haizhu District, Guangzhou 510275, China
2 School of Physics, Sun Yat-sen University, Haizhu District, Guangzhou 510275, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2024, 13(8), 1461; https://doi.org/10.3390/electronics13081461
Submission received: 13 March 2024 / Revised: 10 April 2024 / Accepted: 11 April 2024 / Published: 12 April 2024
(This article belongs to the Section Microelectronics)

Abstract

Super multi-view (SMV) display is a promising 3D display technology; however, potential frame loss due to bandwidth-limited video transmission could cause discomfort to viewers. Thus, an evaluation of the acceptable viewing experience will be valuable. This study investigates the effects of frame loss on visual fatigue in SMV display, focusing on quantified frame loss rates and varying frame loss modes. Experiments were conducted with 20 subjects, utilizing the Stroop test through an SMV display system to evaluate the visual fatigue under different frame loss conditions. The results show a rise in visual fatigue as the frame loss rate increases, with two critical thresholds identified. A 4% frame loss rate marks the threshold for significant loss-induced differences, beyond which visual fatigue begins to become significant in loss-induced modes compared to the normal loss-free mode. Subsequently, a 10% frame loss rate marks the threshold for significant mode-dependent differences, beyond which variations appear between different loss-induced modes, with monocular mode inducing more visual fatigue than binocular and dual-view more than single-view. Consequently, the findings advocate for refining the 3D video processing to maintain a frame loss rate below 4% for negligible effect and considering the interactions between different views for less visual fatigue. This research aims to provide insights and guidance for addressing potential challenges in developing and applying SMV display technology.

1. Introduction

Super multi-view (SMV) 3D display technology [1,2] is one of the leading solutions to the vergence–accommodation conflict (VAC) [3,4], which is a common cause of visual fatigue in traditional 3D displays. It achieves this by utilizing densely arranged viewpoints around the pupil, creating a natural vision-like defocus blurring effect and thus allowing for monocular focusing [5,6]. However, while maintaining the resolution and refresh rate of each viewpoint image, increasing the number of viewpoints causes an exponential growth in video data volume, making the transmission more susceptible to packet losses. Packet losses of compressed video during bandwidth-limited transmission over networks can cause whole-frame losses [7,8], leaving the receiver unable to correctly decode and display specific images of certain viewpoints. For instance, the high-refresh-rate (up to 360 Hz and even 480 Hz) and high-resolution (4K and even 8K) video displays already available on the market intensify the bandwidth demands for SMV 3D video transmission and elevate the risk of frame loss, making this an urgent issue to assess. Such frame loss significantly degrades the video quality and can cause viewer discomfort, such as visual fatigue, resulting in an unacceptable 3D viewing experience. This issue may hinder the technology’s application; however, related evaluation remains unexplored. Therefore, investigating the effects of frame loss on visual fatigue within SMV 3D displays is of great significance for its development and application, especially as refresh rates and resolutions continue to grow.
Research on the viewing experience of 3D displays has extensively explored both visual fatigue and video data loss, providing valuable insights. Regarding visual fatigue, Hoffman [9] examined how focusing cues affect perceptual distortion, fusion, and visual fatigue. Similarly, Kim [10] studied how the rate of change of VAC affects discomfort. Researchers have also employed techniques such as eye-tracking [11], ECG, and EOG [12] to record subjects’ responses and visual fatigue while watching 3D video. For the practical measurement of VAC, Byoungho Lee’s group employed lens arrays and elemental images to measure the accommodative response [13] and further utilized an optometric device to objectively evaluate accommodative responses while viewing a real object and integral imaging [14]. Yasuhiro Takaki [15] utilized a binocular open-view Shack–Hartmann wavefront sensor to measure vergence and accommodative responses and concluded that SMV displays reduce VAC. However, existing research on visual fatigue primarily focuses on VAC issues, while studies exploring VAC-free 3D technologies, such as SMV displays and their associated new challenges, are still limited.
Cognitive fatigue is crucial for evaluating 3D visual fatigue [16,17], as it originates from cognitive processing decline, a primary factor in visual fatigue. Typically, cognitive fatigue arises from intense focusing, depleting neural resources. Hence, understanding 3D visual fatigue necessitates a focus on cognitive fatigue. Study [18] used the oddball paradigm, a commonly used task for cognitive and attention measurement, to evaluate the impacts of passive polarized stereoscopic 3D displays on visual and mental fatigue. Additionally, in study [19], the pupil size change method, which is affected by cognitive load, was utilized to be an indicator for 3D cognitive fatigue. Furthermore, the Stroop test [20,21,22], known as a gold-standard test for assessing cognitive function, focuses on sustained attention and response inhibition with exclusively visual stimuli requiring cognitive load. This makes the Stroop test central to our study for effectively evaluating visual fatigue induced by frame loss.
Research has also delved into the video’s quality of experience (QoE) under conditions of video data loss [23,24], but the main focus has been on the 2D video domain, while 3D videos face even more challenges on bandwidth-limited channels. Hasan’s research [25] provides valuable insights into the effects of data loss on the QoE of 3D video. Employing subjective evaluations, it studies data loss in both views versus in a single view, along with high and low loss rates. However, to establish a practical standard addressing the frame loss in real-world conditions, thorough quantitative research is required, and thus, it is necessary to adopt a comprehensive testing method to estimate viewers’ visual fatigue. In addition, the SMV display presents more complexity with multiple viewpoints, elevating the importance of understanding the different effects of varying frame loss modes.
This study investigates the relationship between frame loss and visual fatigue with quantified frame loss rates and different frame loss modes. The Stroop test is used to evaluate visual fatigue, covering a thorough set of frame loss conditions. The results are intended to offer perspectives for overcoming potential problems in advancing and deploying SMV 3D display technology.
The study is structured as follows: Section 2 delineates the adopted experimental materials and methods based on the SMV 3D display system and the Stroop test. Section 3 employs statistical methods to reveal the core results of the Stroop test and analyze their interpretation for visual fatigue. A general discussion steered by the results is presented in Section 4.

2. Materials and Methods

2.1. Experiment Device

This study’s SMV display system [5] comprises two essential components: a display screen showcasing parallax images and near-eye time-multiplexed apertures consisting of liquid crystal strip-type shutter arrays. These apertures are organized horizontally at intervals smaller than the diameter of a human pupil, effectively directing different viewpoints’ parallax images to distinct locations in the pupil. The apertures possess time-multiplexed attributes, synchronizing with the refresh rate of the display screen. Once the computer’s video output timing is configured, the video playback software starts to stream the video data at a consistent frame rate. The video data is buffered, processed, and relayed to the display screen by a Field Programmable Gate Array (FPGA) development board. Concurrently, the FPGA development board generates the control signals, ensuring synchronization between the opening of the apertures and the refresh of the corresponding video images on the display screen.
Figure 1 demonstrates four viewpoints of the 3D displayed scene with a video refresh rate of 1/t: V_L1 and V_L2 for the left eye, and V_R1 and V_R2 for the right eye. Each viewpoint is associated with a timing-aperture controlled by synchronization signals. Four timing-apertures A_L1, A_R1, A_L2, and A_R2 are turned on sequentially with a time interval of t/4, providing four view zones for the two eyes. During each time interval, a perspective view converging to a viewpoint on the currently open aperture is refreshed synchronously by the display screen. Because the time interval is short, the viewer perceives the four light rays as reaching the two pupils almost simultaneously. Due to vision persistence, two rays merge into a single virtual light spot on the left eye’s retina, allowing for single-eye accommodation adjustment; a similar process occurs in the right eye. All such spots together create the entire 3D displayed scene. However, when the focus shifts to a plane at a different depth, such as the screen surface, the light rays for point P diverge on the retina, producing a defocus blur effect. This effect prompts the eye’s automatic ocular response, synchronizing the accommodation distance with the vergence distance and effectively resolving the VAC.
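To make the timing concrete, the short Python sketch below models the aperture schedule just described: within one per-viewpoint frame period t, the four apertures open one after another for t/4 each while the display refreshes the matching viewpoint image. It is an illustrative model only, not the authors’ FPGA logic; the 30 Hz per-viewpoint period is taken from Section 2.4, and the aperture and viewpoint names mirror Figure 1.

```python
# Illustrative timing model of the time-multiplexed apertures (not the FPGA firmware).
# One per-viewpoint frame period t is split into four t/4 slots; in each slot one
# aperture opens and the display shows the matching viewpoint image.

FRAME_PERIOD = 1 / 30  # t in seconds, assuming the 30 Hz per-viewpoint rate of Section 2.4
APERTURES = ["A_L1", "A_R1", "A_L2", "A_R2"]            # opening order within one period
VIEWPOINTS = {"A_L1": "V_L1", "A_R1": "V_R1",
              "A_L2": "V_L2", "A_R2": "V_R2"}           # viewpoint shown while each aperture is open

def aperture_schedule(frame_index: int):
    """Return (start_time_s, aperture, viewpoint) for the four slots of one frame period."""
    slot = FRAME_PERIOD / 4
    t0 = frame_index * FRAME_PERIOD
    return [(t0 + i * slot, a, VIEWPOINTS[a]) for i, a in enumerate(APERTURES)]

if __name__ == "__main__":
    for start, aperture, view in aperture_schedule(0):
        print(f"t = {start * 1e3:6.2f} ms: open {aperture}, refresh {view}")
```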

2.2. Frame Loss Setting

In our experiments, frame loss is manifested as the insertion of black frames. We artificially created this effect by substituting normal frames with black images in the sequence of views during playback, thus simulating the experience of frame loss in the video. In a four-viewpoint SMV display system, there are four frame loss-induced modes, defined by which viewpoints contain frame loss: (1) monocular single-view mode, (2) monocular double-view mode, (3) binocular single-view mode, and (4) binocular double-view mode. Figure 2 illustrates these modes, with a check mark (√) representing no frame loss in a view and a cross (×) representing the presence of frame loss. In this paper, these modes are referred to as “v1”, “v2”, “v3”, and “v4”, with “v0” denoting the baseline mode of no frame loss in any view.
Nine distinct frame loss rates were investigated for their effects on visual fatigue: 3%, 4%, 5%, 7.5%, 10%, 12.5%, 15%, 17.5%, and 20%. The frame loss rate is calculated as the ratio of the number of lost frames to the total number of frames in all four views over a specific time, as given in Equation (1):
R_loss = (N_loss / N_total) × 100%        (1)
where R_loss is the frame loss rate, N_loss is the number of lost frames, and N_total is the total number of frames. Under a specific frame loss-induced mode and frame loss rate, the lost frames are randomly and uniformly distributed across the four views. This setup simulates conditions where bandwidth limitations are either concentrated in a single video channel or spread uniformly across multiple video channels.
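As an illustration of this setup, the sketch below simulates black-frame substitution for a given loss rate and loss-induced mode. It is a minimal reading of the procedure, not the authors’ playback software: which specific views a mode affects (e.g., which eye for v1 and v2) is chosen arbitrarily here, and restricting the uniform spreading of losses to the affected views is our interpretation of the text.

```python
# Minimal sketch of the black-frame substitution, not the authors' playback software.
# R_loss follows Equation (1): lost frames are counted against the total frames of all
# four views; the concrete view assignments per mode below are illustrative choices.
import random

ALL_VIEWS = ["V_L1", "V_L2", "V_R1", "V_R2"]
MODES = {
    "v0": [],                                       # loss-free baseline
    "v1": ["V_L1"],                                 # monocular single-view (example choice of view)
    "v2": ["V_L1", "V_L2"],                         # monocular double-view (example choice of eye)
    "v3": ["V_L1", "V_R1"],                         # binocular single-view (one view per eye)
    "v4": ["V_L1", "V_L2", "V_R1", "V_R2"],         # binocular double-view (all views)
}

def simulate_frame_loss(frames_per_view: int, loss_rate: float, mode: str, seed: int = 0):
    """Return {view: set of frame indices to replace with black frames}."""
    rng = random.Random(seed)
    n_total = frames_per_view * len(ALL_VIEWS)      # N_total over all four views
    n_loss = round(loss_rate * n_total)             # N_loss = R_loss * N_total
    lost = {v: set() for v in ALL_VIEWS}
    affected = MODES[mode]
    if affected and n_loss:
        # spread the lost frames randomly and uniformly over the affected views
        candidates = [(v, i) for v in affected for i in range(frames_per_view)]
        for v, i in rng.sample(candidates, n_loss):
            lost[v].add(i)
    return lost

# Example: 10% frame loss in monocular double-view mode, 3600 frames per view
losses = simulate_frame_loss(frames_per_view=3600, loss_rate=0.10, mode="v2")
print({view: len(frames) for view, frames in losses.items()})
```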

2.3. Stroop Test and Experiment Objects

The Stroop test can be conducted by comparing the results of two distinct tasks. One is the Reading Task, where the subject has to read aloud characters that denote colors (such as ‘red’, ‘blue’, ‘green’, and ‘yellow’) printed in a uniform color. The other is the Interference Task, where the subject has to identify the color of the print of the characters while the characters represent different colors semantically. For example, when facing a character ‘red’ printed in yellow, the subject is expected to read aloud ‘yellow’ instead of ‘red’. Variations in the subject’s performance between the reading task and the interference task provide crucial insights into their visual processing condition. The intricate depth cues in 3D display require significant cognitive effort, and frame loss may intensify this cognitive load, leading to longer response time and poorer accuracy in the Stroop tasks, which offers a sensitive and reliable measurement of visual fatigue induced by frame loss.
The experiment setup is depicted in Figure 3a. Autodesk 3ds Max is used to create each viewpoint’s image of the 3D test object, as shown in Figure 3b. The image consists of a matrix of Chinese characters (红, 黄, 蓝, and 绿, which correspond to the colors red, yellow, blue, and green, respectively) arranged randomly in 8 columns and 10 rows. To create varying depth perceptions that facilitate better focus among subjects and smooth test progression without losing track of lines during reading, the characters are positioned and sized at distinct distances from the virtual camera in 3ds Max, as depicted in Figure 3c: those in odd and even rows stand 35 cm and 40 cm outside of the screen, respectively, and the object was viewed from 1 m away from the screen. Despite the depth variations, the displayed size of all characters is consistent on the screen. This configuration requires subjects to adjust their focus to discern characters at different depths, enhancing depth perception and enabling concentration on reading each line. Ten unique test sets were generated and randomly selected for each test session to minimize familiarity biases. Calibration ensured that the colors were distributed uniformly and without repetition in sequence, with each character and each color appearing twice per row. Importantly, the characters never appear in the colors that they semantically indicate; instead, they appear in the three alternative colors with equal frequency.
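The row constraints above can be stated compactly in code. The sketch below is a hypothetical generator, not the authors’ 3ds Max tooling: it enforces that each character and each color appears exactly twice per row and that no character is printed in the color it names, using simple rejection sampling; balancing the alternative colors across the whole matrix is left out.

```python
# Hypothetical generator for the Stroop stimulus rows described above (not the authors' tooling).
# Per row: each of the four characters and four colours appears exactly twice, and a character
# is never printed in the colour it semantically names. Rejection sampling keeps it simple.
import random

CHARS = ["红", "黄", "蓝", "绿"]                      # red, yellow, blue, green
COLORS = ["red", "yellow", "blue", "green"]
NAMED_COLOR = dict(zip(CHARS, COLORS))                # colour each character names

def make_row(rng: random.Random):
    chars = CHARS * 2                                 # each character twice -> 8 columns
    colors = COLORS * 2                               # each colour twice
    while True:
        rng.shuffle(chars)
        rng.shuffle(colors)
        if all(NAMED_COLOR[ch] != col for ch, col in zip(chars, colors)):
            return list(zip(chars, colors))           # (character, print colour) pairs

def make_matrix(rows: int = 10, seed: int = 0):
    rng = random.Random(seed)
    return [make_row(rng) for _ in range(rows)]

for row in make_matrix()[:2]:                         # preview the first two rows
    print("  ".join(f"{ch}/{col}" for ch, col in row))
```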

2.4. Subjects and Display Device

In the experiment, 20 subjects aged 20 to 24 years (mean age 21.74 ± 1.01 years; 16 males and 4 females) were selected. Every subject achieved a visual acuity of at least 1.0 with or without glasses. Using the random dot chart, all subjects demonstrated a stereoscopic acuity better than 40 arcsec and successfully fused the 3D displayed scene in subsequent trials. Color vision tests confirmed that no subject suffered from achromatopsia. The pupillary distance of every subject was measured, averaging 64.79 ± 1.36 mm, and was used to adjust the distance between the left and right eyes’ virtual cameras in 3ds Max so that each subject perceived the same disparity, reducing visual discrepancies that could otherwise introduce variance in visual fatigue.
The display device is a 27-inch monitor (LG 27GK750F) with 1920 × 1080 resolution and a refresh rate of up to 240 Hz. The video stream was set at a 1080 × 720 resolution with a 120 Hz refresh rate, considering the processing and transmission capability of the FPGA development board. The monitor operates at a 120 Hz frame rate, sequentially displaying images of the four different viewpoints. These images are projected to different areas of the pupil in synchronization with the opening of the corresponding liquid crystal apertures. Consequently, with time-division multiplexing, the refresh rate of each viewpoint at the pupil is 30 Hz. Additionally, the horizontal interval of the liquid crystal apertures is 1.8 mm, and the video playback software KODI can consistently output high-frame-rate video.

2.5. Procedure and Testing

As shown in Figure 4a, the subjects were seated upright in front of the experimental table, directly facing the near-eye time-multiplexed apertures connected to the FPGA development board. The screen displaying the 3D scene was positioned one meter away from the apertures. Before the subjects watched the Stroop test objects, the aperture interval was adjusted to ensure optimal alignment with their eyes, which was crucial for viewpoint intensity homogeneity. Subsequently, the subjects positioned their eyes near the apertures, kept their position fixed, and then began to watch the 3D Stroop test objects displayed on the screen.
As shown in Figure 4b, after a frame loss rate was uploaded to the SMV system, the frame loss-free mode v0 was designated as the benchmark, and the four associated frame loss-induced modes (v1, v2, v3, and v4) were presented in random order. Under the assigned frame loss rate and frame loss mode, the Reading Task was performed first, followed by the Interference Task. The subjects were asked to complete each task as quickly as possible while maintaining consistent effort throughout. A 30 s eyes-closed rest was arranged after each task to alleviate visual fatigue. To negate the effects of familiarity, the order of frame loss rates and modes was randomized for all subjects. Subjects were advised to rest adequately before the experiment.
As a previous study [20] has mentioned, disparities in the completion time of the two tasks are pivotal in evaluating the Stroop test’s interference impact. Using Equation (2), the Stroop interference effect (SIE) is determined as the completion time differential between the interference and reading tasks [22]:
SIE = T_i − T_r        (2)
where T_i is the completion time of the Interference Task and T_r is the completion time of the Reading Task.
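For clarity, the SIE and the “additional SIE” relative to the loss-free baseline (used later in Figure 5) reduce to simple differences; the snippet below is a worked illustration with made-up timings, not measured data.

```python
# Worked illustration of Equation (2) and of the "additional SIE" used in Figure 5.
# The timings below are invented for the example, not data from the study.
def sie(t_interference: float, t_reading: float) -> float:
    """Stroop interference effect: SIE = T_i - T_r (seconds)."""
    return t_interference - t_reading

def additional_sie(sie_mode: float, sie_baseline: float) -> float:
    """Extra interference of a loss-induced mode relative to the loss-free mode v0."""
    return sie_mode - sie_baseline

sie_v0 = sie(t_interference=55.0, t_reading=40.0)   # baseline, no frame loss
sie_v2 = sie(t_interference=62.0, t_reading=41.0)   # e.g., monocular double-view
print(f"SIE(v0) = {sie_v0:.1f} s, additional SIE(v2) = {additional_sie(sie_v2, sie_v0):.1f} s")
```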

3. Results

It is practically meaningful to evaluate the frame loss effects by considering the differences between loss-free mode and loss-induced modes (i.e., the loss-induced differences) as well as the differences across different frame loss-induced modes (described as the mode-dependent differences).
To compare the changes in SIE values induced by four frame loss-induced modes, Figure 5 presents four differential lines illustrating the mean value of additional SIE among four frame loss-induced modes (v1 to v4) against the baseline v0. Table 1 displays the means and standard deviations (SD) of additional SIE for loss-induced modes at each specific frame loss rate.
Firstly, as observed in Figure 5, there is a monotonic growth in additional SIE across all modes as the frame loss rate rises. At most frame loss rates, mode v2 (monocular double-view) consistently shows the highest additional SIE value, indicating the most significant impact on visual fatigue. In contrast, mode v3 (binocular single-view) shows the lowest value, suggesting the mildest impact under similar conditions. This result offers preliminary guidance: for applications subject to unavoidable frame loss, mode v3 (binocular single-view) is recommended to minimize the risk of visual fatigue, whereas mode v2 (monocular double-view) should be avoided.
Further, three distinct stages of frame loss rates can be observed in Figure 5. Initially, at 3% and 4% frame loss rates, the additional SIE for the four modes remains minimal, around 1 s. However, as the frame loss rate rises from 5% to 10%, the additional SIE of all modes begins to exceed 2 s and increases significantly. At a 10% frame loss rate, the SD reaches 0.74 s; below this rate it fluctuates without a clear increase, and beyond it grows monotonically. Subsequently, the four lines begin to exhibit divergent growth, with mode v2 showing the most pronounced increase and mode v3 the mildest.
To determine the statistical significance of the disparities observed across the stages, a repeated measures analysis of variance (ANOVA) was conducted for further data analysis. ANOVA, a statistical method extensively utilized in human factors engineering, assesses between-group and within-group variances to evaluate whether observed differences are statistically significant rather than attributable to random variations, thereby enabling statistically significant conclusions. Its variant, repeated measures ANOVA, is specifically employed when the same subjects undergo multiple tests to analyze changes across these conditions [26,27]. For our experiment, we considered the necessity of evaluating the same group of subjects across varied conditions and ensuring the SIE data met parametric prerequisites, including normal distribution, sphericity, and homogeneity of variances. Thus, we opted for repeated measures ANOVA to examine the variance across different frame loss modes at each frame loss rate.
Statistical analysis was conducted using IBM SPSS Statistics 27. The mean SIE values and 95% confidence intervals (CIs) under the five frame loss modes at each of the nine frame loss rates are shown to represent the data’s variability and reliability. Then, in the results of the repeated measures ANOVA, three key metrics are discussed. (1) The F-value quantifies the variance caused by different frame loss modes relative to random error; a higher F-value indicates that the impacts of the frame loss modes are more clearly distinct. (2) The p-value quantifies the statistical significance, with thresholds represented by asterisks: p < 0.05 (marked as *) denotes significant differences, p < 0.01 (marked as **) denotes highly significant differences, and p < 0.001 (marked as ***) denotes extremely significant differences. (3) The partial η2 value quantifies the effect size; higher values signify more substantial differences or relationships between the study variables. Where the repeated measures ANOVA revealed significant mode differences at certain frame loss rates, further exploration was conducted through post hoc paired comparisons using the least significant difference (LSD) method. These yield the mean differences (MD) between two frame loss modes and their respective p-values, affirming the statistical significance of the variations found. Figure 6 presents the mean SIE values and 95% CIs of the 20 subjects under the five frame loss modes at the nine frame loss rates. The detailed repeated measures ANOVA results, along with the post hoc results, are listed in Table 2.
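For readers reproducing this kind of analysis outside SPSS, the sketch below shows an equivalent pipeline in Python with the pingouin package (an assumption about tooling, not what the authors used): a one-way repeated measures ANOVA over the five modes, followed by uncorrected pairwise comparisons, which play the role of the LSD post hoc test. The data frame is filled with synthetic numbers purely to make the example runnable.

```python
# Sketch of a repeated measures ANOVA with LSD-style post hoc comparisons in Python.
# This mirrors the SPSS analysis described above but is not the authors' script;
# the SIE values generated below are synthetic placeholders, not the study data.
import numpy as np
import pandas as pd
import pingouin as pg   # pip install pingouin

rng = np.random.default_rng(0)
modes = ["v0", "v1", "v2", "v3", "v4"]
records = [
    {"subject": s, "mode": m, "SIE": 15 + 5 * modes.index(m) + rng.normal(0, 2)}
    for s in range(20) for m in modes            # 20 subjects x 5 frame loss modes
]
df = pd.DataFrame(records)

# Repeated measures ANOVA: F, uncorrected p-value, partial eta squared (np2)
aov = pg.rm_anova(data=df, dv="SIE", within="mode", subject="subject", detailed=True)
print(aov[["Source", "F", "p-unc", "np2"]])

# Pairwise comparisons without p-value correction, analogous to the LSD post hoc test
# (named pairwise_ttests in older pingouin versions)
post = pg.pairwise_tests(data=df, dv="SIE", within="mode", subject="subject", padjust="none")
print(post[["A", "B", "T", "p-unc"]])
```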
As shown in Figure 6a,b, the SIE values for the five frame loss modes show no significant statistical differences at the low frame loss rates of 3% (F = 0.570, p = 0.655, partial η2 = 0.029) and 4% (F = 1.14, p = 0.341, partial η2 = 0.057), indicating that the loss-induced differences remain nonsignificant at these low frame loss rates. However, at a 5% frame loss rate in Figure 6c, a turning point can be noticed, where significant differences begin to emerge (F = 5.748, p = 0.002 **, partial η2 = 0.232). Furthermore, in Table 2, the post hoc paired comparison results show significant differences between each of the four frame loss-induced modes (v1 to v4) and the loss-free mode (v0). The emergence of significant differences marks the turning point where the loss-induced differences in visual fatigue, as reflected by SIE, become significant beyond a 4% frame loss rate, showing a visible visual fatigue difference between the loss-free mode v0 and the loss-induced modes v1 to v4. The loss-induced differences intensify as the frame loss rate increases to 7.5% and 10%, with a concomitant rise in F-values and partial η2 values and a drop in p-values. Therefore, while the impact of frame loss on SIE is still negligible at the lower rates of 3% and 4%, it becomes substantial upon reaching 5%, thereby establishing 4% as the frame loss rate threshold for significant loss-induced differences.
The mode-dependent differences, which do not manifest at a 10% frame loss rate, are revealed at 12.5% between frame loss-induced modes v2 and v3 (MD = 2.52, p = 0.016 *), as shown in Figure 6e,f. Larger mode-dependent differences appear at a 15% frame loss rate, where the differences reach statistical significance between v1 and v2 (MD = −1.95, p = 0.035 *) and between v2 and v3 (MD = 3.22, p = 0.003 **). At 17.5% and 20% frame loss rates, significant differences are observed in four (v1–v2, v2–v3, v2–v4, v3–v4) and five (v1–v2, v1–v3, v2–v3, v2–v4, v3–v4) comparison pairs of loss-induced modes, respectively. This trend highlights increasing mode-dependent differences. Consequently, 10% can be regarded as the second frame loss rate threshold, marking significant mode-dependent differences, beyond which the different impacts of the various frame loss-induced modes need to be considered.
The analysis indicates that in bandwidth-limited transmission where frame loss occurs, optimizing the encoding and decoding of 3D video based on the frame loss rate is feasible. Firstly, strategies should aim to keep the frame loss rate at or below 4% to ensure the impact is negligible. If bandwidth allows frame loss to be sustained at this level, increasing the number of viewpoints is feasible to enhance 3D perception. Simultaneously, it is advisable to refine the 3D video processing based on the varied impacts of the different loss-induced modes and the interactions between views, favoring less fatiguing modes such as v3 when losses are unavoidable. This approach may help reduce viewer discomfort and preserve the viewing experience.

4. Discussion and Summary

This study investigated the effects of frame loss on visual fatigue within SMV displays. A total of 20 subjects were involved in the experimental process, where the Stroop test, conducted via an SMV 3D display system, was used to assess visual fatigue across various frame loss scenarios.
Regarding the frame loss rate, the results indicate a rise in visual fatigue in correlation with increasing frame loss rates, with a threshold for significant loss-induced differences identified at 4%. Beyond this threshold, the effects of loss-induced modes become more significant compared to those of loss-free mode. This aligns with prior studies [28,29], which confirmed the relationship between higher frame loss rate and worse viewing experience. Subsequently, the threshold for significant mode-dependent differences can be set at 10%, over which it becomes necessary to consider the different impacts of various loss-induced modes.
As for the frame loss modes, significant differences were found between monocular and binocular modes, as well as between single-view and dual-view modes. The results demonstrate that monocular modes induce greater visual fatigue than binocular modes. Specifically, among the single-view modes, monocular v1 causes more visual fatigue than binocular v3; among the dual-view modes, monocular v2 has a more significant impact than binocular v4. This phenomenon may be attributed to binocular rivalry [30,31], where the brain encounters more difficulty processing mismatched visual information from the two eyes, leading to increased visual fatigue. In this case, frame loss introduces irregular flickering, resulting in a noticeable disparity and rapid changes in luminance between the affected and unaffected eyes. Most subjects reported more discomfort in monocular modes, with symptoms such as eye strain, dizziness, and a decline in responsiveness. Decreasing the luminance in one eye can significantly affect monocular performance in aspects such as visual acuity, contrast sensitivity, and stereoacuity [32]. In 3D display, sensory fusion requires not only the alignment of images on corresponding retinal areas but also sufficient similarity in brightness; otherwise, 3D perceptual ability declines [33]. In our experiment, the binocular rivalry caused by luminance differences resulting from frame loss may be one of the factors influencing cognitive performance and thus leading to visual fatigue.
When comparing single-view with dual-view modes, the latter induced more visual fatigue. For instance, in monocular conditions, dual-view v2 causes greater visual fatigue than single-view v1, and similarly, in binocular conditions, dual-view v4 also exceeds the single-view v3. This could result from dual-view modes covering a larger pupil area, leading to stronger visual perception. Subjects also reported a greater range and magnitude of visual changes with dual-view modes compared to single-view.
Furthermore, visual fatigue might result from VAC induced by frame loss during 3D video observation. Figure 7 illustrates binocular viewing under conditions of frame loss. A_R and A_L represent the potential accommodation distances for the right and left eyes, respectively. In Figure 7a, no frame loss occurs and all four views are received; the accommodation distances of both eyes are aligned towards the displayed point P, enabling VAC-free observation. However, in Figure 7b, frame loss in one view of the left eye might trigger a change to 2D visual perception in that eye, since only one viewpoint remains, causing the left eye to focus on the screen instead. This change could lead to a focus mismatch between the left eye (on the screen) and the right eye (on the virtual light spot P). Additionally, Figure 7c shows a scenario where the left eye experiences frame loss in both views simultaneously and receives no images, resulting in an undefined focus distance. The accommodation distance fluctuations within a single eye and the inconsistencies between the left and right eyes might be among the causes of visual fatigue.
In summary, this research contributes to understanding visual fatigue in SMV 3D display, emphasizing the importance of both frame loss rates and modes. Practical relationships are preliminarily established, which can inform future work on the design of SMV display systems and refining 3D video encoding and decoding methods to reduce frame loss effects and enhance the viewing experience.
Future research can expand in several directions. This study concentrated on a particular scenario within SMV display; as 3D display technology advances, a broader spectrum of 3D technologies and more diverse frame loss scenarios should be considered. Additionally, while the Stroop test was employed to assess visual fatigue, alternative methods might be more suitable for reflecting the effects of frame loss. Lastly, the relatively limited subject group, consisting mainly of university students, suggests the need for a more diverse subject pool to enhance the generality of the research.

Author Contributions

Conceptualization, D.T., Y.C., J.W. and L.L.; methodology, D.T., Y.C. and J.W.; software, H.F.; validation, H.F.; formal analysis, H.F.; investigation, H.F., J.L., S.W. and J.Z.; resources, D.T., Y.C., J.W., L.L. and Z.C.; data curation, H.F., J.L., S.W. and J.Z.; writing—original draft preparation, H.F.; writing—review and editing, H.F., Y.C. and J.W.; visualization, H.F.; supervision, D.T., Y.C., J.W., Z.C. and L.L.; project administration, H.F., Y.C. and J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is sponsored by the Guangdong Province’s Key Research and Development Program, grant number 2019B010152001, and the National Key Research and Development Program, grant number 2021YFF0701001.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors wish to thank HaiKun Huang of the School of Electronics and Information Technology for the technical support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ueno, T.; Takaki, Y. Approximated Super Multi-View Head-Mounted Display to Reduce Visual Fatigue. Opt. Express 2020, 28, 14134–14150. [Google Scholar] [CrossRef] [PubMed]
  2. Wan, W.; Qiao, W.; Pu, D.; Chen, L. Super Multi-View Display Based on Pixelated Nanogratings under an Illumination of a Point Light Source. Opt. Lasers Eng. 2020, 134, 106258. [Google Scholar] [CrossRef]
  3. Yano, S.; Ide, S.; Mitsuhashi, T.; Thwaites, H. A Study of Visual Fatigue and Visual Comfort for 3D HDTV/HDTV Images. Displays 2002, 23, 191–201. [Google Scholar] [CrossRef]
  4. Ukai, K.; Howarth, P.A. Visual Fatigue Caused by Viewing Stereoscopic Motion Images: Background, Theories, and Observations. Displays 2008, 29, 106–116. [Google Scholar] [CrossRef]
  5. Sun, D.; Wang, C.; Teng, D.; Liu, L. Three-Dimensional Display on Computer Screen Free from Accommodation-Convergence Conflict. Opt. Commun. 2017, 390, 36–40. [Google Scholar] [CrossRef]
  6. Liu, L.; Ye, Q.; Pang, Z.; Huang, H.; Lai, C.; Teng, D. Polarization Enlargement of FOV in Super Multi-View Display Based on near-Eye Timing-Apertures. Opt. Express 2022, 30, 1841–1859. [Google Scholar] [CrossRef] [PubMed]
  7. Zhou, Y.; Xiang, W.; Wang, G. Frame Loss Concealment for Multiview Video Transmission Over Wireless Multimedia Sensor Networks. IEEE Sens. J. 2015, 15, 1892–1901. [Google Scholar] [CrossRef]
  8. Chang, Y.-L.; Lin, T.-L.; Cosman, P.C. Network-Based H.264/AVC Whole-Frame Loss Visibility Model and Frame Dropping Methods. IEEE Trans. Image Process. 2012, 21, 3353–3363. [Google Scholar] [CrossRef] [PubMed]
  9. Hoffman, D.M.; Girshick, A.R.; Akeley, K.; Banks, M.S. Vergence-Accommodation Conflicts Hinder Visual Performance and Cause Visual Fatigue. J. Vis. 2008, 8, 33. [Google Scholar] [CrossRef] [PubMed]
  10. Kim, J.; Kane, D.; Banks, M.S. The Rate of Change of Vergence-Accommodation Conflict Affects Visual Discomfort. Vis. Res. 2014, 105, 159–165. [Google Scholar] [CrossRef] [PubMed]
  11. Iatsun, I.; Larabi, M.-C.; Fernandez-Maloigne, C. Investigation and Modeling of Visual Fatigue Caused by S3D Content Using Eye-Tracking. Displays 2015, 39, 11–25. [Google Scholar] [CrossRef]
  12. Yang, X.; Wang, D.; Hu, H.; Yue, K. P-31: Visual Fatigue Assessment and Modeling Based on ECG and EOG Caused by 2D and 3D Displays. SID Symp. Dig. Tech. Pap. 2016, 47, 1237–1240. [Google Scholar] [CrossRef]
  13. Kim, Y.; Hong, K.; Kim, J.; Yang, H.K.; Hwang, J.-M.; Lee, B. Accommodation Measurement According to Angular Resolution Density in Three-Dimensional Display; Blankenbach, K., Chien, L.-C., Lee, S.-D., Wu, M.H., Eds.; SPIE: San Francisco, CA, USA, 2011; p. 79560Q. [Google Scholar] [CrossRef]
  14. Kim, Y.; Kim, J.; Hong, K.; Yang, H.K.; Jung, J.-H.; Choi, H.; Min, S.-W.; Seo, J.-M.; Hwang, J.-M.; Lee, B. Accommodative Response of Integral Imaging in Near Distance. J. Disp. Technol. 2012, 8, 70–78. [Google Scholar] [CrossRef]
  15. Mizushina, H.; Nakamura, J.; Takaki, Y.; Ando, H. Super Multi-View 3D Displays Reduce Conflict between Accommodative and Vergence Responses: SMV Displays Solve Vergence-Accommodation Conflict. J. Soc. Inf. Disp. 2016, 24, 747–756. [Google Scholar] [CrossRef]
  16. Inama, M.; Spolverato, G.; Impellizzeri, H.; Bacchion, M.; Creciun, M.; Casaril, A.; Moretto, G. Cognitive Load in 3d and 2d Minimally Invasive Colorectal Surgery. Surg. Endosc. 2020, 34, 3262–3269. [Google Scholar] [CrossRef] [PubMed]
  17. Souchet, A.D.; Philippe, S.; Lourdeaux, D.; Leroy, L. Measuring Visual Fatigue and Cognitive Load via Eye Tracking While Learning with Virtual Reality Head-Mounted Displays: A Review. Int. J. Hum. Comput. Interact. 2021, 38, 801–824. [Google Scholar] [CrossRef]
  18. Amin, H.U.; Malik, A.S.; Mumtaz, W.; Badruddin, N.; Kamel, N. Evaluation of Passive Polarized Stereoscopic 3D Display for Visual & Mental Fatigues. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 7590–7593. [Google Scholar] [CrossRef]
  19. Park, S.; Mun, S.; Lee, D.W.; Whang, M. IR-Camera-Based Measurements of 2D/3D Cognitive Fatigue in 2D/3D Display System Using Task-Evoked Pupillary Response. Appl. Opt. 2019, 58, 3467–3480. [Google Scholar] [CrossRef] [PubMed]
  20. MacLeod, C.M. Half a Century of Research on the Stroop Effect: An Integrative Review. Psychol. Bull. 1991, 109, 163–203. [Google Scholar] [CrossRef] [PubMed]
  21. Rauch, W.A.; Schmitt, K. Fatigue of Cognitive Control in the Stroop-Task. Proc. Annu. Meet. Cogn. Sci. Soc. 2009, 37, 750–755. [Google Scholar]
  22. Daniel, F.; Kapoula, Z. Induced Vergence-Accommodation Conflict Reduces Cognitive Performance in the Stroop Test. Sci. Rep. 2019, 9, 1247. [Google Scholar] [CrossRef] [PubMed]
  23. Zhao, T.; Liu, Q.; Chen, C.W. QoE in Video Transmission: A User Experience-Driven Strategy. IEEE Commun. Surv. Tutor. 2017, 19, 285–302. [Google Scholar] [CrossRef]
  24. Su, G.-M.; Su, X.; Bai, Y.; Wang, M.; Vasilakos, A.; Wang, H. QoE in Video Streaming over Wireless Networks: Perspectives and Research Challenges. Wirel. Netw. 2015, 22, 1571–1593. [Google Scholar] [CrossRef]
  25. Hasan, M.M.; Hossain, M.d.A.; Alotaibi, N.; Arnold, J.F.; Azad, A. Binocular Rivalry Impact on Macroblock-Loss Error Concealment for Stereoscopic 3D Video Transmission. Sensors 2023, 23, 3604. [Google Scholar] [CrossRef] [PubMed]
  26. Park, E.; Cho, M.; Ki, C.-S. Correct Use of Repeated Measures Analysis of Variance. Ann. Lab. Med. 2009, 29, 1–9. [Google Scholar] [CrossRef] [PubMed]
  27. Armstrong, R.A.; Eperjesi, F.; Gilmartin, B. The Application of Analysis of Variance (ANOVA) to Different Experimental Designs in Optometry. Ophthalmic Physiol. Opt. 2002, 22, 248–256. [Google Scholar] [CrossRef] [PubMed]
  28. Pastrana-Vidal, R.R.; Gicquel, J.C.; Colomes, C.; Cherifi, H. Sporadic Frame Dropping Impact on Quality Perception; Rogowitz, B.E., Pappas, T.N., Eds.; SPIE: San Jose, CA, USA, 2004; p. 182. [Google Scholar] [CrossRef]
  29. Tommasi, F.; De Luca, V.; Melle, C. Packet Losses and Objective Video Quality Metrics in H.264 Video Streaming. J. Vis. Commun. Image Represent. 2015, 27, 7–27. [Google Scholar] [CrossRef]
  30. Grossberg, S.; Yazdanbakhsh, A.; Cao, Y.; Swaminathan, G. How Does Binocular Rivalry Emerge from Cortical Mechanisms of 3-D Vision? Vis. Res. 2008, 48, 2232–2250. [Google Scholar] [CrossRef] [PubMed]
  31. Patterson, R.; Winterbottom, M.; Pierce, B.; Fox, R. Binocular Rivalry and Head-Worn Displays. Hum. Factors 2007, 49, 1083–1096. [Google Scholar] [CrossRef] [PubMed]
  32. Chang, Y.-H.; Lee, J.B.; Kim, N.S.; Lee, D.W.; Chang, J.H.; Han, S.-H. The Effects of Interocular Differences in Retinal Illuminance on Vision and Binocularity. Graefe’s Arch. Clin. Exp. Ophthalmol. 2006, 244, 1083–1088. [Google Scholar] [CrossRef] [PubMed]
  33. Lovasik, J.V.; Szymkiw, M. Effects of Aniseikonia, Anisometropia, Accommodation, Retinal Illuminance, and Pupil Size on Stereopsis. Investig. Ophthalmol. Vis. Sci. 1985, 26, 741–750. [Google Scholar]
Figure 1. The SMV display system based on near-eye time-multiplexed apertures. Firstly, video data is transmitted from the computer to the FPGA development board, which generates synchronization signals for the four apertures and uploads video data to the display screen. In the figure, ‘P’ on the ‘Display screen’ stands for pixel, ‘V’ for viewpoint, and ‘A’ for aperture. The subscript ‘R’ denotes the right eye, while ‘L’ denotes the left eye; the numbers ‘1’ and ‘2’ indicate the two different views for each eye. With the displayed point P as an example, two light rays, P_L1–V_L1 and P_L2–V_L2, which emit from the two pixels P_L1 and P_L2 on the display screen, reach the left eye via apertures A_L1 and A_L2, respectively. These two light rays correspond to the left eye’s two distinct perspectives of the 3D displayed point P, denoted by viewpoints V_L1 and V_L2. A similar process occurs in the right eye; the two light rays P_R1–V_R1 and P_R2–V_R2 create the two viewpoints V_R1 and V_R2. The synchronization signals control the four apertures, ensuring that each aperture opens sequentially during a time period of t/4 while the other three apertures remain closed. These four light rays are sequentially perceived by the two eyes within a given time interval t/4, facilitating the complete perception of the 3D displayed point P.
Figure 2. Four frame loss-induced modes: (a) monocular single-view (v1), (b) monocular double-view (v2), (c) binocular single-view (v3), and (d) binocular double-view (v4). A check mark (√) represents no frame loss in the corresponding view, and a cross (×) represents the presence of frame loss.
Figure 3. The setup of the Stroop test: (a) the virtual camera perspective in the scene constructed in 3ds Max, (b) the content and arrangement of the character matrix displayed on the screen, and (c) the depth variation setup for odd and even rows.
Figure 4. (a) The experimental environment setup and (b) the experimental procedure flowchart.
Figure 5. The means of additional SIE among four loss-induced modes (v1 to v4) against the baseline (loss-free mode v0) across different frame loss rates.
Figure 6. Bar charts comparing the mean values and 95% confidence intervals (CIs) of the Stroop interference effect (SIE) of 20 subjects across five frame loss modes at nine different frame loss rates (R_loss), with statistical significance denoted by asterisks: * denotes significant differences, ** denotes highly significant differences, and *** denotes extremely significant differences. Subfigures (a–i) present the SIE at increasing frame loss rates of 3%, 4%, 5%, 7.5%, 10%, 12.5%, 15%, 17.5%, and 20%, respectively.
Figure 7. Depiction of the accommodation response under frame loss: (a) no frame loss in all views, (b) frame loss in one view of the left eye, and (c) frame loss in both views of the left eye simultaneously.
Table 1. The means and standard deviations (SD) of additional SIE against v0 for the four loss-induced modes (v1 to v4) at each frame loss rate.
Frame Loss Rate | 3% | 4% | 5% | 7.5% | 10% | 12.5% | 15% | 17.5% | 20%
Mean (s) | 0.59 | 1.14 | 2.82 | 4.36 | 5.26 | 7.15 | 8.74 | 11.14 | 11.75
SD (s) | 0.24 | 0.34 | 0.42 | 0.24 | 0.74 | 1.06 | 1.33 | 2.11 | 2.26
Table 2. Analysis results from repeated measures ANOVA and post hoc comparisons on the SIE value across different frame loss modes at increasing frame loss rates. The repeated measures ANOVA results delineate F-values, p-values, and partial η2 values to evaluate the significance of differences across all frame loss modes. The p-value quantifies the statistical significance, with thresholds represented by asterisks: p < 0.05 (marked as *) denotes significant differences, p < 0.01 (marked as **) denotes highly significant differences, and p < 0.001 (marked as ***) denotes extremely significant differences. Post hoc results, indicated by mean differences (MD) and p-values, evaluate the significance of pairwise comparison differences between two frame loss modes.
Frame Loss Rate | Repeated Measures ANOVA Results | Post hoc Results
3%: F = 0.570
p = 0.655
partial η2 = 0.029
/
4%: F = 1.14
p = 0.341
partial η2 = 0.057
/
5%: F = 5.75
p = 0.002 **
partial η2 = 0.232
v0–v1: MD = 2.58, p = 0.001 **
v0–v3: MD = −2.40, p = 0.001 **
v0–v2: MD = −2.95, p = 0.003 **
v0–v4: MD = −3.35, p < 0.001 ***
7.5%: F = 8.76
p < 0.001 ***
partial η2 = 0.316
v0–v1: MD = −4.41, p < 0.001 ***
v0–v3: MD = −4.07, p < 0.001 ***
v0–v2: MD = −4.65, p < 0.001 ***
v0–v4: MD = −4.30, p < 0.001 ***
10%: F = 10.28
p < 0.001 ***
partial η2 = 0.351
v0–v1: MD = −5.31, p < 0.001 ***
v0–v3: MD = −4.44, p < 0.001 ***
v0–v2: MD = −6.21, p < 0.001 ***
v0–v4: MD = −5.10, p < 0.001 ***
12.5%: F = 22.36
p < 0.001 ***
partial η2 = 0.541
v0–v1: MD = −7.12, p < 0.001 ***
v0–v3: MD = −6.10, p < 0.001 ***
v2–v3: MD = 2.52, p = 0.016 *
v0–v2: MD = −8.62, p < 0.001 ***
v0–v4: MD = −6.77, p < 0.001 ***
15%: F = 71.29
p < 0.001 ***
partial η2 = 0.790
v0–v1: MD = −8.44, p < 0.001 ***
v0–v3: MD = −7.18, p < 0.001 ***
v1–v2: MD = −1.95, p = 0.035 *
v0–v2: MD = −10.39, p < 0.001 ***
v0–v4: MD = −8.97, p < 0.001 ***
v2–v3: MD = 3.22, p = 0.003 **
17.5%: F = 41.48
p < 0.001 ***
partial η2 = 0.686
v0–v1: MD = −10.54, p < 0.001 ***
v0–v3: MD = −8.69, p < 0.001 ***
v1–v2: MD = −3.20, p = 0.001 **
v3–v4: MD = −2.91, p = 0.009 **
v0–v2: MD = −13.73, p < 0.001 ***
v0–v4: MD = −11.60, p < 0.001 ***
v2–v3: MD = 5.05, p < 0.001 ***
v2–v4: MD = 2.13, p = 0.031 *
20%: F = 66.57
p < 0.001 ***
partial η2 = 0.778
v0–v1: MD = −11.44, p < 0.001 ***
v0–v3: MD = −9.13, p < 0.001 ***
v1–v2: MD = −3.22, p = 0.004 **
v2–v3: MD = 5.53, p < 0.001 ***
v3–v4: MD = −2.63, p = 0.016 *
v0–v2: MD = −14.66, p < 0.001 ***
v0–v4: MD = −11.76, p < 0.001 ***
v1–v3: MD = 2.31, p = 0.012 *
v2–v4: MD = 2.90, p = 0.019 *
