1. Introduction
Different perceptions of an identical object located in different visual fields (VFs) are known as VF anisotropy. VF anisotropy may be evoked by the opponent processes of many neural functions in the visual system. For example, visual input signals projected onto the retina from the left VF are carried to the right primary visual cortex (visual area 1; V1) and vice versa. Furthermore, in human visual processing, the input signals from V1 are projected to the prestriate cortex (visual area 2; V2) via the ventral stream, representing visual input derived from the natural world.
Regarding visual input representation, Andersen et al. (1993) proposed that the representation of spatial information is assembled by collecting visual stimulus information formed by various coordinate transformations during visual processing [1]. Visual processing starts when light rays hit the retina, and visual input signals are encoded in retinal coordinates. These visual signals (retinal coordinates) are then combined with non-visual signals (extraretinal coordinates) in the brain to encode the visual stimuli. The extraretinal coordinates can be obtained from non-retinal sources. For example, first, head-centered coordinates take the head frame as the reference and are defined by integrating the retinal coordinates and the position of the eye. Second, body-centered coordinates can be obtained by combining information regarding retinal, eye, and head positions. Third, world-centered coordinates are formed by combining the head-centered coordinates with vestibular input (an information source that senses rotational movement for spatial updating).
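The coordinate hierarchy described above can be sketched as a chain of offsets (a simplified 2D sketch under the assumption that each transformation reduces to adding a position offset; real transformations also involve rotations, and all variable names and values here are illustrative):

```python
import numpy as np

# Simplified sketch: each reference frame adds the next offset
# (real coordinate transformations also involve rotations).
retinal = np.array([2.0, -1.0])       # stimulus position in retinal coordinates (deg)
eye_in_head = np.array([5.0, 0.0])    # eye position relative to the head (deg)
head_on_body = np.array([0.0, 10.0])  # head position relative to the body (deg)
vestibular = np.array([-3.0, 0.0])    # head-in-world offset sensed by vestibular input (deg)

head_centered = retinal + eye_in_head         # retinal + eye position -> [7., -1.]
body_centered = head_centered + head_on_body  # + head position -> [7., 9.]
world_centered = head_centered + vestibular   # head-centered + vestibular input -> [4., -1.]
```

With identical retinal coordinates, a nonzero vestibular offset still changes the world-centered location, which is the manipulation exploited in the present study.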
In most recent studies focusing on perceptual differences among the VFs, the observers' head was fixed and the gaze was fixated on a reference object placed in the central VF. Many notable reports have been made on VF anisotropy (manipulating retinal coordinates) regarding many aspects of visual perception [2,3,4,5]. Specifically, the vertical hemifields have a more dominant effect among the VFs than the horizontal hemifields [6]. Moreover, during psychophysical experiments that require attentional resources in response to a change in the light source, pupil sensitivity to light is higher in the upper visual field (UVF) than in the lower visual field (LVF) [7,8,9]. Additionally, in 3D-spatial interactions, objects located in the UVF are biased toward the extrapersonal region (for scene memory), whereas objects in the LVF are biased toward the peripersonal (PrP) region (for visual grasping). Other advantages of the LVF include better contrast sensitivity [10], visual accuracy [11], motion processing [12,13], and spatial resolution of attention and spatial frequency sensitivity [14]. The LVF bias in processing information about an object is caused by the substantially higher number (60% more) of ganglion cells in the superior hemiretina than in the inferior hemiretina [15], which results in improved visual performance in the LVF.
VFs are also known to evoke different brightness perceptions. Perceptual brightness modulation is associated with cognitive factors such as memory and visual experience. This effect has been studied using pupillometry, with photographs and paintings of the sun as the stimuli. Binda et al. (2013) confirmed that sun photographs yielded greater pupil constriction than other stimuli despite being physically equiluminant (i.e., squares with the same mean luminance as each sun photograph, phase-scrambled versions of each sun photograph, and photographs of the moon) [16]. Subsequently, Castellotti et al. (2020) discovered that paintings including a depiction of the sun produce greater pupil constriction than paintings that include a depiction of the moon or no depiction of a light source, despite having the same overall mean luminance [17]. Recently, Istiqomah et al. (2022) reported that image stimuli perceived as the sun yielded greater pupil constriction than those perceived as the moon under average-luminance-controlled conditions [18]. Their results indicated that perception, rather than the mere physical luminance of the image stimuli, plays the dominant role, owing to ecological factors such as the existence of the sun. All of these studies demonstrate that pupillometry reflects not just the physical luminance (low-order cognition) but also the subjective brightness perception (higher-order cognition) in response to the stimuli. In addition, Tortelli et al. (2022) confirmed that the pupillary response was influenced by contextual information (such as from images of the sun), considering inter-individual differences in the observers' perception [19].
Pupillometry measures pupil size in response to stimuli and may reflect various cognitive states. The initial change in pupil diameter is caused by the pupillary light reflex (PLR). However, the degree of change in pupil diameter is influenced by visual attention, visual processing, and the subjective interpretation of brightness. For example, Laeng and Endestad (2012) reported that a glare illusion perceived as brighter than its physical luminance induced greater pupil constriction [20]. This glare illusion has a luminance gradient converging toward the pattern's center that intensely enhances perceived brightness [21,22]. Furthermore, Laeng and Sulutvedt (2014) revealed that, owing to the response of the eyes to hazardous light (such as sunshine), the pupils constricted considerably when participants imagined a sunny sky or the face of their mother under sunlight [23]. Another study, by Mathôt et al. (2017), revealed that words conveying a sense of brightness yielded greater pupil constriction than those conveying a sense of darkness [24]. These differences indicate that the pupils respond to a source that may damage the eyes even when it occurs only in the observer's imagination. In addition, Suzuki et al. (2019) revealed that the blue glare illusion generated the largest pupil constrictions, reporting that blue is a dominant color for the human visual system in natural scenes (e.g., the blue sky); despite the average physical luminance of the glare and control stimuli being identical, the pupillary responses to the glare illusion reflected subjective brightness perception [22].
Recently, we demonstrated that the pupillary response to glare and halo stimuli differed depending on whether the stimuli were presented in the upper, lower, left, or right VF by manipulating the retinal coordinates [25]. We found that pupillary responses to the stimuli (glare and halo) in the UVF resulted in the largest pupil dilation, with significantly reduced dilation specifically in response to the glare illusion, owing to higher-order cognition. These previous results reflect that the glare illusion acted as a dazzling light source (the sun) influencing the pupillary responses. However, our previous study and other studies analyzing subjective brightness perception in the VFs (also mentioned in paragraph 3) raise the possibility that differences in retinal coordinates and the many opponent processes in the human visual system affect subjective brightness perception in the VFs. Therefore, clarifying whether there is anisotropy of subjective brightness perception while maintaining identical retinal coordinates and manipulating the world-centered coordinates could provide valuable insights into the anisotropy of subjective brightness perception in the world-centered coordinates, based on pupillary responses to the glare illusion and halo stimuli. In particular, this study aimed to elucidate whether there is an ecological advantage at five different locations in the world-centered coordinates based on pupillary responses to the glare illusion, which conveys a dazzling effect.
The difference between our previous and present studies is the visual input: the previous study manipulated the retinal coordinates, whereas this work manipulates the world-centered coordinates (formed by combining head-centered coordinates and vestibular input). To investigate the anisotropy of subjective brightness perception in the world-centered coordinates, we presented the glare and halo stimuli at five different locations (top, bottom, left, right, and center) in the world-centered coordinates and recorded the pupillary responses while the observers fixated on a fixation cross located in the middle of the stimulus. We used a virtual environment to easily control the physical luminance of the stimuli and the designated environment. In addition, the contextual cues of the 3D virtual environment provide more cues for features associated with the given tasks and help narrow the visual perception area; thus, the observers could perceive the stimuli easily [26]. Furthermore, forming the world-centered coordinates requires combining vestibular input with the head-centered coordinates (the integration of retinal coordinates and eye position). Therefore, as the vestibular input, we adopted an active scene that instructed the observers to move their heads in accordance with the stimulus location in the world-centered coordinates. To ensure that the present study's results are not merely pupil size artifacts induced by head movement during the active scene, we also prepared a passive scene, in which the virtual environment moved automatically as a substitute for the head movement and head movement was not allowed during the stimulus presentation. In addition, we applied both glare and halo stimuli to find out whether there is any distinction between the pupillary responses to the two, particularly in association with ecological factors, as the glare illusion represents the sun [22,25], at the five locations in the world-centered coordinates. In the present study, through the active and passive scenes, we hypothesized that there is anisotropy in the pupillary responses in the world-centered coordinates; specifically, that the difference between the pupillary responses to the glare (more constricted than halo) and halo stimuli would be greatest at the top, and that pupillary responses to the stimuli at the top would yield the highest degree of pupillary constriction as a consequence of ecological factors such as avoiding the dazzling effect of sunshine entering the retina.
3. Results
The main results of the present study are presented as the pupil size and y-axis of eye gaze in response to the glare and halo stimuli over four seconds across the five locations in each scene. The time courses of the pupillary responses for each stimulus pattern (glare and halo), stimulus location (top, bottom, left, right, and center), and scene (active and passive) are illustrated in Figure 3 (4-s exposure). We separated the pupil size data, based on the MPCL value, into early and late components (Figure 4).
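The split of each pupil trace into early and late components around the MPCL value can be sketched as follows (a minimal sketch on a synthetic trace; the sampling rate, window width, and variable names are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

fs = 120                     # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)  # 4-s stimulus presentation
# Synthetic pupil trace: constriction to a minimum at t = 1 s, then redilation.
pupil = 0.5 * (t - 1.0) ** 2 - 0.5

mpcl_idx = int(np.argmin(pupil))  # sample index of the minimum pupil size
mpcl_time = t[mpcl_idx]           # here: 1.0 s

# Early component: mean pupil size ~0.1 s before and after the MPCL value.
early_mask = (t >= mpcl_time - 0.1) & (t <= mpcl_time + 0.1)
early_mean = pupil[early_mask].mean()

# Late component: trapezoidal area under the curve (AUC) from the MPCL
# value to stimulus offset, with uniform sample spacing dt.
dt = 1 / fs
late_auc = np.sum((pupil[mpcl_idx:-1] + pupil[mpcl_idx + 1:]) * dt / 2)
```

The early window isolates the PLR-driven constriction around the minimum, while the AUC aggregates the slower redilation attributed to higher-order processing.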
(1) In the early component (Figure 4, bottom), within a range of approximately 0.1 s before and after the MPCL value, an rmANOVA of the pupillary response to the stimuli revealed very strong evidence for an effect of stimulus pattern (F[1,17] = 58.899, p < 0.001, ηp² = 0.776, BFM = 90.205) but no effect of scene or location and no interaction among the parameters (scene, stimulus pattern, and location) (Table 1 and Table 2).
(2) In the late component (the area under the curve [AUC]) (Figure 4, top), defined as the integral of the pupillary response from the MPCL value to the end of the stimulus presentation, a three-way rmANOVA revealed strong evidence for an effect of stimulus pattern (F[1,17] = 12.437, p = 0.003, ηp² = 0.423, BFM = 26.005) and a significant main effect of location (F[2.944,50.044] = 3.469, p = 0.023, ηp² = 0.169, BFM = 0.019) (Table 3 and Table 4). Nevertheless, neither the post hoc comparisons of location (from the classical frequentist analysis) nor the Bayesian rmANOVA of location and the other conditions showed a significant effect. Further investigation of the Bayesian post hoc comparisons of location obtained moderate evidence only for the top-bottom (t[18] = 2.586, p = 0.192, Cohen's d = 0.312, BF10,U = 6.660) and bottom-left (t[18] = −2.927, p = 0.094, Cohen's d = −0.251, BF10,U = 3.469) pairs. Additionally, we plotted the descriptive information of the Bayesian rmANOVA (Figure 5); the results indicated that the pupillary response to the stimuli at the bottom location had the smallest mean pupil size change in the AUC compared with the other conditions.
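The pairwise contrast underlying such post hoc comparisons can be sketched as a paired t-test with a paired Cohen's d (a minimal sketch on deterministic synthetic AUC values, not the study's data; the Bayesian counterparts reported above were computed in JASP):

```python
import numpy as np
from scipy import stats

n = 19  # observers
# Synthetic per-observer AUC values (arbitrary units); "bottom" is shifted
# downward by 0.4 with deterministic, zero-mean residuals.
auc_top = np.linspace(1.0, 2.0, n)
residuals = np.linspace(-0.3, 0.3, n)
auc_bottom = auc_top - 0.4 + residuals

# Paired (repeated-measures) t-test: same observers in both conditions.
t_stat, p_val = stats.ttest_rel(auc_top, auc_bottom)

diff = auc_top - auc_bottom
cohens_d = diff.mean() / diff.std(ddof=1)  # paired Cohen's d
```

Because the test operates on within-observer differences, between-observer variability in overall pupil size cancels out of the comparison.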
Finally, we conducted a three-way rmANOVA (5 locations × 2 stimulus patterns × 2 scenes) on the y-axis eye gaze data to verify that the retinal coordinates were identical across the stimulus locations and patterns between the scenes. We found moderate evidence in favor of an effect of stimulus pattern (F[1,17] = 4.195, p = 0.056, ηp² = 0.198, BFM = 6.845) (Table 5 and Table 6). However, there was no evidence in the post hoc comparison of stimulus patterns in the Bayesian rmANOVA.
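The three-way repeated-measures design (5 locations × 2 stimulus patterns × 2 scenes) can be sketched with `statsmodels`' `AnovaRM` on synthetic data (an illustrative sketch only; the study's analysis was run in JASP, and all values here are random placeholders):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
locations = ["top", "bottom", "left", "right", "center"]
patterns = ["glare", "halo"]
scenes = ["active", "passive"]

# One synthetic y-gaze observation per subject x cell (balanced design,
# as AnovaRM requires).
rows = [
    {"subject": s, "location": loc, "pattern": pat, "scene": sc,
     "y_gaze": rng.normal(0.0, 0.1)}
    for s in range(18)
    for loc in locations for pat in patterns for sc in scenes
]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="y_gaze", subject="subject",
              within=["location", "pattern", "scene"]).fit()
# anova_table holds F, df, and p for the 3 main effects and 4 interactions.
print(res.anova_table)
</antml>```

A null result on the gaze data, as in the study, supports the claim that retinal coordinates were matched across conditions.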
4. Discussion
Our previous study reported that the peripheral VFs (upper, lower, left, and right) in which the glare and halo stimuli were located influenced the subjective brightness perception of participants, as represented by the pupillary responses to those stimuli [25]. The UVF generated greater pupil dilation in response to either stimulus than the other VFs did, and reduced pupil dilation in response to the glare illusion compared with the halo stimulus. These results were attributed to a higher-order cognitive bias formed by statistical regularity in the processing of natural scenes. However, in our previous study, it is possible that differences in retinal coordinates affected pupil size, as the pupillary responses to the stimuli were influenced by pupil sensitivity, spatial resolution, and brightness perception (lower-order cognition) [7,14,33]. Therefore, to further investigate subjective brightness perception beyond the peripheral VFs (our previous study's results), we conducted experiments using active and passive scenes while maintaining identical retinal coordinates and manipulating the world-centered coordinates, that is, by presenting the glare and halo stimuli at five different locations (top, bottom, left, right, and center) in the VR environment to investigate the anisotropy of subjective brightness perception in the world-centered coordinates. By manipulating the world-centered coordinates, we confirmed that the pupillary responses at each location differed despite the retinal coordinates being identical.
Furthermore, we divided the pupil size data into two components based on the MPCL value: the early component, to evaluate the pupillary responses induced by the PLR within approximately 0.1 s before and after the MPCL value, and the late component (the AUC), to assess higher-order cognition (e.g., emotional arousal and subjective brightness perception) using Function 1 [25,28,29].
(1) The early component. Our data provide very strong evidence for an effect of stimulus pattern (F[1,17] = 58.899, p < 0.001, ηp² = 0.776, BFM = 90.205). The significantly greater pupil constriction in response to the glare stimuli than to the halo stimuli reflects the enhancement of perceived brightness [20]. In previous studies, the pupillary responses, especially during the PLR period, revealed the alteration of physical light intensity by means of lower-level visual processing [21,34]. The PLR is elicited by visual attention, visual processing, and interpretation of the visual input [34] and, possibly, higher-order cognitive involvement [35]. Hence, low-order cognition may affect the pupillary response in the early component, as evoked by the enhancement of brightness perception. However, the early component analysis alone was insufficient to fulfill the present work's aim of elucidating whether there is an ecological advantage among the five locations in the world-centered coordinates, which involves high-level visual processing. Therefore, we further investigated the pupillary response in the late component.
(2) Late component (AUC). The stimulus pattern showed strong evidence (F[1,17] = 12.437, p = 0.003, ηp² = 0.423, BFM = 26.005) of an effect of the stimuli's physical light intensity entering the retina (low-order cognition) after the minimum peak of the pupil response (MPCL). This evidence may not be induced merely by the physical luminance of the glare and halo stimuli but may also indicate complex visual processing.
Furthermore, our data show a significant main effect of location (F[2.944,50.044] = 3.469, p = 0.023, ηp² = 0.169, BFM = 0.019). We further investigated the post hoc comparisons of location from the classical frequentist rmANOVA, and there were no significant effects for any pair of locations. In line with the previous study by Keysers et al. (2020), we used Bayes factor hypothesis testing to overcome the absence of evidence in the post hoc comparisons of location from the classical frequentist rmANOVA [36]. Under Bayes factor hypothesis testing, the post hoc comparisons of location generated moderate evidence for the top-bottom (t[18] = 2.586, p = 0.192, Cohen's d = 0.312, BF10,U = 6.660) and bottom-left (t[18] = −2.927, p = 0.094, Cohen's d = −0.251, BF10,U = 3.469) pairs. Moreover, the descriptive plots generated by JASP (Figure 5) exhibit the smallest mean pupil size change in response to the stimuli at the bottom. Contrary to our hypothesis that the pupil would be most constricted in response to the stimuli at the top, we demonstrated that the response to the stimuli at the bottom attained a higher degree of pupil constriction than that to the stimuli at the top.
The highest degree of pupil constriction, produced by the pupillary response to the stimuli at the bottom, is linked to one of the four areas in the 3D-spatial interaction model proposed by Previc (1998) [37]. One of those areas is the region in which a person can easily grasp items (such as edible objects for consumption), known as the PrP region. The PrP region has a lower-field bias within a 2-m radius of the observer. Objects that have already been observed are processed in the PrP region. Furthermore, the PrP region in a virtual environment, especially for a first-person (FP) view without an extended part of the FP (as in the present work), is defined by the peripheral space of the FP; it has a larger field of visual perception than the extended PrP region and no visual obstacle [38]. Therefore, visual processing (recognition and memorization) of objects in the PrP region requires minimal effort (an easier task for an observer's eyes). The low demand of responding to stimuli presented at the bottom in the world-centered coordinates resulted in a higher degree of pupil constriction than that in response to stimuli presented at the top. In addition, statistical analysis of the pupil data in the present study revealed no significant main effect of scene in either the early or the late component. This result confirms that head movement did not affect the pupillary response during the stimulus presentation.
Taken together, the complex visual processing induced by the glare and halo stimuli and the moderate evidence from the Bayes factors in the late component, particularly for the top-bottom pair of locations, imply that the subjective brightness perception represented by the pupillary responses to the stimuli at the top in the world-centered coordinates might be influenced by ecological factors. For instance, first, the glare illusion in the present study represents the sun, an ecological factor evoked by the glare and halo stimuli [22,25]. Second, the stimuli at the top were perceived as darker than those at the bottom owing to the cognitive bias related to natural scenery, where the bright blue sky is present [22]. All the evidence in our study demonstrates anisotropy of subjective brightness perception among the five locations in the world-centered coordinates. These differences in subjective brightness perception occurred, even though we applied the same stimulus luminance and the same retinal coordinates across the five locations, because of extraretinal information tied to the ecological factors. Moreover, the y-axis gaze angle did not seem to affect the pupil diameter, indicating identical retinal coordinates. Future studies should present different stimuli (e.g., ambiguous sun and moon images) and ask observers whether they perceive the stimuli as the sun or the moon, to fully segregate the involvement of low-order cognition in the pupillary response to the stimuli.
The present study has two limitations. First, eye rotation during the experiment (foreshortening with gaze angle) may have influenced the pupil size measurements because the HMD is integrated with cameras used to record eye movements. We attempted to minimize this limitation by instructing the participants to fixate on the fixation cross and by rejecting trials based on the fixation of the eye gaze. Second, we considered only the vertical field of the world-centered coordinates because we aimed to elucidate whether ecological factors (such as the sun's existence) affect subjective brightness perception in the world-centered coordinates. Thus, we believe that the present study offers valuable insights into the anisotropy of subjective brightness perception among the five locations (top, bottom, left, right, and center) in the world-centered coordinates, especially for understanding the influence of extraretinal information on subjective brightness perception in the world-centered coordinates, as revealed by using the glare illusion, manipulating the world-centered coordinates in a VR environment, and performing pupillometry. In addition, the present study provides the ophthalmology field with the valuable insight that the pupillary response is not affected by head movement.