Vision, Volume 2, Issue 2 (June 2018) – 9 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
10 pages, 984 KiB  
Article
Brightening and Dimming Aftereffects at Low and High Luminance
by Omar Hassan, Mark A. Georgeson and Stephen T. Hammett
Vision 2018, 2(2), 24; https://doi.org/10.3390/vision2020024 - 13 Jun 2018
Viewed by 3392
Abstract
Adaptation to a spatially uniform field that increases or decreases in luminance over time yields a “ramp aftereffect”, whereby a steady, uniform luminance appears to dim or brighten, and an appropriate non-uniform test field appears to move. We measured the duration of this aftereffect of adaptation to ascending and descending luminance for a wide range of temporal frequencies and luminance amplitudes. Three types of luminance ramp profiles were used: linear, logarithmic, and exponential. The duration of the motion aftereffect increased as amplitude increased, regardless of the frequency, slope, or ramp profile of the adapting pattern. At low luminance, this result held for ascending luminance adaptation, but the duration of the aftereffect was significantly reduced for descending luminance adaptation. This reduction in the duration of the aftereffect at low luminance is consistent with differential recruitment of temporally tuned cells of the ON and OFF pathways, but the relative independence of the effect from temporal frequency is not.
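A minimal sketch of the three adapting ramp profiles named above (linear, logarithmic, and exponential) is given below; the base luminance, amplitude, frequency, and exact parameterisation are illustrative assumptions, not the values used in the study.

```python
# Illustrative sketch (not the authors' stimulus code): one ascending cycle of the
# linear, logarithmic, and exponential luminance ramp profiles named in the abstract,
# under assumed base luminance, amplitude, and frequency.
import numpy as np

def luminance_ramps(freq_hz=1.0, base_cd=10.0, amplitude_cd=40.0, n_samples=256):
    """Return one ascending ramp cycle (cd/m^2) for each of the three profiles."""
    t = np.linspace(0.0, 1.0 / freq_hz, n_samples)   # time within one cycle (s)
    p = t * freq_hz                                   # normalised phase, 0..1
    return {
        "time_s": t,
        "linear": base_cd + amplitude_cd * p,
        "logarithmic": base_cd + amplitude_cd * np.log1p(9.0 * p) / np.log(10.0),
        "exponential": base_cd + amplitude_cd * np.expm1(p * np.log(2.0)),
    }

ramps = luminance_ramps()
print({k: (v[0], v[-1]) for k, v in ramps.items() if k != "time_s"})  # each spans 10 -> 50 cd/m^2
```

Descending ramps are simply the time-reversed versions of these profiles.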

14 pages, 842 KiB  
Article
Contextual Effects in Face Lightness Perception Are Not Expertise-Dependent
by Dorita H. F. Chang, Yin Yan Cheang and May So
Vision 2018, 2(2), 23; https://doi.org/10.3390/vision2020023 - 11 Jun 2018
Cited by 1 | Viewed by 3603
Abstract
Lightness judgments of face stimuli are context-dependent (i.e., judgments of face lightness are influenced by race classification). Here, we tested whether contextual effects in face lightness perception are modulated by expertise, exploiting well-known other race effects in face perception. We used a lightness-matching paradigm where Chinese and White observers were asked to adjust the lightness of a variable face to match that of a standard face. The context (i.e., race category) of the two faces could be the same or different. Our data indicated that both groups had the smallest matching errors in same-context trials, for which errors did not vary across different racial categories. For cross-context trials, observers made the largest (negative) errors when the reference face was Black, as compared to Chinese and White references, for which matching errors were no different from zero or trended positively. Critically, this pattern was similar for both groups. We suggest that contextual influences in lightness perception are unlikely to be guided by classical mechanisms that drive face perception. We instead speculate that such influences manifest in terms of an interaction between race assumptions (e.g., expected surface reflectance patterns) and traditional mechanisms for reflectance computations.

22 pages, 2705 KiB  
Article
Differentiating between Affine and Perspective-Based Models for the Geometry of Visual Space Based on Judgments of the Interior Angles of Squares
by Mark Wagner, Gary Hatfield, Kelly Cassese and Alexis N. Makwinski
Vision 2018, 2(2), 22; https://doi.org/10.3390/vision2020022 - 02 Jun 2018
Cited by 5 | Viewed by 3980
Abstract
This paper attempts to differentiate between two models of visual space. One model suggests that visual space is a simple affine transformation of physical space. The other proposes that it is a transformation of physical space via the laws of perspective. The present paper reports two experiments in which participants are asked to judge the size of the interior angles of squares at five different distances from the participant. The perspective-based model predicts that the angles within each square on the side nearest to the participant should seem smaller than those on the far side. The simple affine model under our conditions predicts that the perceived size of the angles of each square should remain 90°. Results of both experiments were most consistent with the perspective-based model. The angles of each square on the near side were estimated to be significantly smaller than the angles on the far side for all five squares in both experiments. In addition, the sum of the estimated size of the four angles of each square declined with increasing distance from the participant to the square and was less than 360° for all but the nearest square.
(This article belongs to the Special Issue The Perspective of Visual Space)
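To make the perspective-based prediction concrete, the sketch below projects a ground-plane square onto a vertical picture plane under assumed viewing parameters (eye height, viewing distance, square size) and computes the interior angles of the resulting trapezoid; this is an illustrative geometric exercise, not the authors' model of visual space.

```python
# Geometric sketch under assumed viewing parameters; not the authors' model of visual space.
import numpy as np

def projected_square_angles(eye_height=1.6, near_distance=5.0, side=1.0, focal=1.0):
    """Project a ground-plane square onto a vertical picture plane; return interior angles (deg)."""
    # Ground-plane corners (x, z): near-left, near-right, far-right, far-left.
    corners = [(-side / 2, near_distance), (side / 2, near_distance),
               (side / 2, near_distance + side), (-side / 2, near_distance + side)]
    # Pinhole projection: image x = f*x/z, image y = -f*h/z (the square lies below eye level).
    img = np.array([(focal * x / z, -focal * eye_height / z) for x, z in corners])
    angles = []
    for i in range(4):
        prev_vec = img[i - 1] - img[i]
        next_vec = img[(i + 1) % 4] - img[i]
        cos_a = np.dot(prev_vec, next_vec) / (np.linalg.norm(prev_vec) * np.linalg.norm(next_vec))
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return angles  # order: near-left, near-right, far-right, far-left

print(projected_square_angles())  # near angles < 90 deg, far angles > 90 deg
```

Note that the interior angles of any planar quadrilateral still sum to 360°, so the reported sums below 360° reflect observers' judgments rather than a property of a planar projected image.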

21 pages, 3379 KiB  
Article
Natural Perspective: Mapping Visual Space with Art and Science
by Alistair Burleigh, Robert Pepperell and Nicole Ruta
Vision 2018, 2(2), 21; https://doi.org/10.3390/vision2020021 - 07 May 2018
Cited by 10 | Viewed by 15167
Abstract
Following its discovery in fifteenth-century Italy, linear perspective has often been hailed as the most accurate method of projecting three-dimensional visual space onto a two-dimensional picture plane. However, when we survey the history of European art it is evident that few artists fully complied with its mathematical rules, despite many of them being rigorously trained in its procedures. In this paper, we will consider how artists have actually depicted visual space, and present evidence that images created according to a “natural” perspective (NP) used by artists are judged as better representations of visual space than those created using standard linear (LP) and curvilinear fisheye (FP) projective geometries. In this study, we built a real three-dimensional scene and produced photographs of the scene in three different perspectives (NP, LP and FP). An online experiment in which we asked people to rank the perspectives in order of preference showed a clear preference for NP compared to the FP and LP. In a second experiment, participants were asked to view the real scene and rate each perspective on a range of psychological variables. Results showed that NP was the most preferred and the most effective in depicting the physical space naturally. We discuss the implications of these results and the advantages and limitations of our approach for studying the global metric and geometrical structure of visual space.
(This article belongs to the Special Issue The Perspective of Visual Space)
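For readers unfamiliar with the two standard projections being compared, the sketch below gives a pinhole mapping (LP) and an equidistant fisheye mapping (FP) of a 3D point; the "natural" perspective (NP) is the authors' own construction and is not reproduced here.

```python
# Illustrative sketch of the two standard projections compared in the paper:
# linear (pinhole) and equidistant fisheye. NP is not reproduced here.
import numpy as np

def linear_projection(point, focal=1.0):
    """Pinhole (rectilinear) projection of a camera-space point (x, y, z), z > 0."""
    x, y, z = point
    return focal * x / z, focal * y / z

def fisheye_projection(point, focal=1.0):
    """Equidistant fisheye projection: image radius is proportional to the off-axis angle."""
    x, y, z = point
    r = np.hypot(x, y)
    if r == 0.0:
        return 0.0, 0.0
    theta = np.arctan2(r, z)          # angle from the optical axis
    return focal * theta * x / r, focal * theta * y / r

print(linear_projection((1.0, 0.5, 4.0)), fisheye_projection((1.0, 0.5, 4.0)))
```

The pinhole mapping stretches wide-angle content toward the image margins, while the fisheye mapping compresses it; the paper's comparison concerns which compromise observers judge to look most natural.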

13 pages, 1671 KiB  
Article
Slant of a Surface Shifts Binocular Visual Direction
by Tsutomu Kusano and Koichi Shimono
Vision 2018, 2(2), 20; https://doi.org/10.3390/vision2020020 - 06 May 2018
Viewed by 3530
Abstract
We demonstrate how the slant of a surface affects the relative visual direction between binocular stimuli. In two experiments, we measured the visual direction of a binocular stimulus at different distances in the mid-sagittal plane or in the transverse plane at eye level relative to the center of the stimulus field. Experiment 1 showed that when a binocular stimulus (a vertical bar) was presented in front of or behind a surface slanted along the vertical center of the surface, its visual direction shifted toward the surface. Experiment 2 showed that when a binocular stimulus (a horizontal bar) was presented in front of or behind a surface slanted along the horizontal center of the surface, its visual direction also shifted toward the surface. These results indicate that the slant of a surface should be listed among the variables that contribute to the binocular visual direction, as well as the retinal loci of the stimulus, binocular eye position, the location of the visual egocenter, and stimulus properties.
(This article belongs to the Special Issue Visual Direction)

15 pages, 12433 KiB  
Article
Image Stabilization in Central Vision Loss: The Horizontal Vestibulo-Ocular Reflex
by Esther G. González, Runjie Shi, Luminita Tarita-Nistor, Efrem D. Mandelcorn, Mark S. Mandelcorn and Martin J. Steinbach
Vision 2018, 2(2), 19; https://doi.org/10.3390/vision2020019 - 13 Apr 2018
Cited by 4 | Viewed by 4521
Abstract
For patients with central vision loss and controls with normal vision, we examined the horizontal vestibulo-ocular reflex (VOR) in complete darkness and in the light when enhanced by vision (VVOR). We expected that the visual-vestibular interaction during VVOR would produce an asymmetry in the gain due to the location of the preferred retinal locus (PRL) of the patients. In the dark, we hypothesized that the VOR would not be affected by the loss of central vision. Nine patients (ages 67 to 92 years) and 17 controls (ages 16 to 81 years) were tested in 10-s active VVOR and VOR procedures at a constant frequency of 0.5 Hz while their eye and head movements were recorded with a video-based binocular eye tracker. We computed the gain by analyzing the eye and head peak velocities produced during the intervals between saccades. In the light and in darkness, a significant proportion of patients showed larger leftward than rightward peak velocities, consistent with a PRL to the left of the scotoma. No asymmetries were found for the controls. These data support the notion that, after central vision loss, the PRL in eccentric vision becomes the centre of visual direction, even in the dark.
(This article belongs to the Special Issue Age-Related Macular Degeneration)
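The gain measure described above is essentially the ratio of eye to head peak velocity during the slow-phase intervals between saccades. The sketch below shows one plausible way to compute it from position traces; the sampling rate, saccade threshold, and interval logic are assumptions, not the authors' exact analysis pipeline.

```python
# Plausible sketch of a VOR/VVOR gain computation from eye and head position traces;
# threshold, sampling rate, and segmentation rules are assumptions.
import numpy as np

def vor_gain(eye_pos_deg, head_pos_deg, sample_rate_hz=250.0, saccade_thresh_deg_s=100.0):
    """Estimate gain as the ratio of eye to head peak velocity between saccades."""
    eye_vel = np.gradient(eye_pos_deg) * sample_rate_hz
    head_vel = np.gradient(head_pos_deg) * sample_rate_hz
    slow_phase = np.abs(eye_vel) < saccade_thresh_deg_s   # drop saccadic samples
    # Split the recording into contiguous segments of constant slow/fast status.
    boundaries = np.flatnonzero(np.diff(slow_phase.astype(int))) + 1
    gains = []
    for segment in np.split(np.arange(len(eye_vel)), boundaries):
        if len(segment) < 10 or not slow_phase[segment[0]]:
            continue                                       # skip short or saccadic segments
        eye_peak = np.max(np.abs(eye_vel[segment]))
        head_peak = np.max(np.abs(head_vel[segment]))
        if head_peak > 0:
            gains.append(eye_peak / head_peak)
    return np.median(gains) if gains else np.nan
```

Comparing the gains computed separately for leftward and rightward half-cycles would expose the kind of directional asymmetry reported in the abstract.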

11 pages, 959 KiB  
Article
Changes in Tonic Alertness but Not Voluntary Temporal Preparation Modulate the Attention Elicited by Task-Relevant Gaze and Arrow Cues
by Dana A. Hayward and Jelena Ristic
Vision 2018, 2(2), 18; https://doi.org/10.3390/vision2020018 - 07 Apr 2018
Cited by 4 | Viewed by 3859
Abstract
Attention is engaged differently depending on the type and utility of an attentional cue. Some cues like visual transients or social gaze engage attention effortlessly. Others like symbols or geometric shapes require task-relevant deliberate processing. In the laboratory, these effects are often measured using a cuing procedure, which typically manipulates cue type and its utility for the task. Recent research however has uncovered that in addition to spatial orienting, this popular paradigm also engages two additional processes—tonic alertness and voluntary temporal preparation—both of which have been found to modulate spatial orienting elicited by task-irrelevant cues but not task-relevant symbols. Here we assessed whether changes in tonic alertness and voluntary temporal preparation also modulated attentional orienting elicited by task-relevant social gaze and nonsocial arrow cues. Our results indicated that while the effects of spatial attention were reliable in all conditions and did not vary with cue type, the magnitude of orienting was larger under high tonic alertness. Thus, while the cue’s task utility appears to have the power to robustly drive attentional orienting, changes in tonic alertness may modulate the magnitude of such deliberate shifts of attention elicited by task-relevant central social and nonsocial cues.
(This article belongs to the Special Issue Reflexive Shifts in Visual Attention)
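As background for the "magnitude of orienting" discussed above, spatial orienting in cuing tasks is conventionally indexed by the response-time difference between uncued and cued targets; the sketch below illustrates that index with hypothetical data and is not the study's analysis.

```python
# Illustrative sketch only: the conventional index of spatial orienting in a cuing task
# is the mean response-time difference between uncued and cued targets.
import numpy as np

def orienting_effect_ms(rt_cued_ms, rt_uncued_ms):
    """Positive values indicate faster responses at the cued location."""
    return float(np.mean(rt_uncued_ms) - np.mean(rt_cued_ms))

print(orienting_effect_ms([412, 398, 405], [431, 420, 428]))  # hypothetical RTs in ms
```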

10 pages, 4353 KiB  
Article
Differential Angular Expansion in Perceived Direction in Azimuth and Elevation Are Yoked to the Presence of a Perceived Ground Plane
by Frank H. Durgin and Umi I. Keezing
Vision 2018, 2(2), 17; https://doi.org/10.3390/vision2020017 - 24 Mar 2018
Cited by 2 | Viewed by 3645
Abstract
It has been proposed that perceived angular direction relative to straight-ahead is exaggerated in perception, and that this exaggeration is greater in elevation (or declination) than in azimuth. Prior research has suggested that exaggerations in elevation may be tied to the presence of a visual ground plane, but there have been mixed results across studies using different methods of dissociation. In the present study, virtual environments were used to dissociate visual from gravitational upright while human participants (N = 128) made explicit angular direction judgments relative to straight ahead. Across these experimental manipulations, observers were positioned either upright (Experiments 1A and 1B) or sideways (Experiment 2), so as to additionally dissociate bodily orientation from gravitational orientation. In conditions in which a virtual environment was perceived as containing a level ground plane, large-scale exaggerations consistent with the visually-specified orientation of the ground plane were observed. In the absence of the perception of a level ground plane, angular exaggerations were relatively small. The ground plane serves as an important reference frame for angular expansion in the perceived visual direction.
(This article belongs to the Special Issue Visual Direction)

10 pages, 1655 KiB  
Article
Beyond the Vestibulo-Ocular Reflex: Vestibular Input is Processed Centrally to Achieve Visual Stability
by Edwin S. Dalmaijer
Vision 2018, 2(2), 16; https://doi.org/10.3390/vision2020016 - 21 Mar 2018
Cited by 1 | Viewed by 6049
Abstract
The current study presents a re-analysis of data from Zink et al. (1998, Electroencephalography and Clinical Neurophysiology, 107), who administered galvanic vestibular stimulation through unipolar direct current. They placed electrodes on each mastoid and applied either right or left anodal stimulation. Ocular torsion and visual tilt were measured under different stimulation intensities. New modelling introduced here demonstrates that directly proportional linear models fit the relationship between vestibular input and visual tilt reasonably well, but not that between vestibular input and ocular torsion. Instead, an exponential model characterised by a decreasing slope and an asymptote fitted best. These results demonstrate that in the results presented by Zink et al. (1998), ocular torsion could not completely account for visual tilt. This suggests that vestibular input is processed centrally to stabilise vision when ocular torsion is insufficient. Potential mechanisms and seemingly conflicting literature are discussed.
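To make the model comparison concrete, the two candidate forms can be fitted as in the sketch below: a directly proportional line and a saturating exponential with a decreasing slope and an asymptote. The data here are made up for illustration and are not the re-analysed measurements from Zink et al. (1998).

```python
# Sketch of the two candidate model forms with made-up data; not the re-analysed
# measurements from Zink et al. (1998).
import numpy as np
from scipy.optimize import curve_fit

def proportional(x, k):
    return k * x

def saturating_exponential(x, a, b):
    return a * (1.0 - np.exp(-x / b))

# Hypothetical stimulation intensities (mA) and responses (deg), for illustration only.
intensity = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
response = np.array([0.9, 1.6, 2.1, 2.4, 2.6, 2.7])

(k,), _ = curve_fit(proportional, intensity, response)
(a, b), _ = curve_fit(saturating_exponential, intensity, response, p0=(3.0, 1.0))

for name, pred in [("proportional", proportional(intensity, k)),
                   ("saturating exponential", saturating_exponential(intensity, a, b))]:
    sse = float(np.sum((response - pred) ** 2))
    print(f"{name}: SSE = {sse:.3f}")
```

With saturating data like these, the exponential form yields the smaller residual error, which is the shape of result the abstract reports for ocular torsion.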
