Sensory and Working Memory: Stimulus Encoding, Storage, and Retrieval

A special issue of Vision (ISSN 2411-5150).

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 30366

Special Issue Editors


Guest Editor
Dr. Haluk Öğmen
Daniel Felix Ritchie School of Engineering & Computer Science, University of Denver, Denver, CO, USA
Interests: neuro-engineering; vision; visual psychophysics; visual memory; attention; computational neuroscience

Guest Editor
Dr. Srimant Tripathy
School of Optometry and Vision Science, Faculty of Life Sciences, University of Bradford, Bradford, West Yorkshire, England
Interests: visual perception; motion perception; attention and memory

Special Issue Information

Dear Colleagues,

Human memory consists of three distinct stores: sensory memory, short-term/working memory, and long-term memory. Over the last decade, working memory has been studied extensively in terms of its capacity, encoding (slot versus resource), stages, neural correlates, and how information is stored as a function of stimulus conditions. The goal of this Special Issue is to bring together the most recent work in these related areas and offer a contemporary synthesis of our understanding of these memory stages and their operation.

Dr. Haluk Öğmen
Dr. Srimant Tripathy
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Vision is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensory memory
  • iconic memory
  • working memory
  • short-term memory

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


20 pages, 7277 KiB  
Article
Capacity and Allocation across Sensory and Short-Term Memories
by Shaoying Wang, Srimant P. Tripathy and Haluk Öğmen
Vision 2022, 6(1), 15; https://doi.org/10.3390/vision6010015 - 1 Mar 2022
Cited by 1 | Viewed by 3885
Abstract
Human memory consists of sensory memory (SM), short-term memory (STM), and long-term memory (LTM). SM has a large capacity but decays rapidly; STM has limited capacity but lasts longer. The traditional view of these memory systems resembles a leaky hourglass, with the large top and bottom portions representing the large capacities of SM and LTM, and the narrow portion in the middle representing the limited capacity of STM. The “leak” in the top part of the hourglass depicts the rapid decay of the contents of SM. Recently, however, it was shown that major bottlenecks for motion processing exist prior to STM, and the “leaky hourglass” model was replaced by a “leaky flask” model with a narrower top part to capture bottlenecks prior to STM. The leaky flask model was based on data from one study, and the first goal of the current paper was to test whether the leaky flask model would generalize to a different set of data. The second goal was to explore various block diagram models for memory systems and determine the one best supported by the data. We expressed these block diagram models in terms of statistical mixture models and, using the Bayesian information criterion (BIC), found that a model with four components, viz., SM, attention, STM, and guessing, provided the best fit to our data. In summary, we generalized previous findings about early qualitative and quantitative bottlenecks, as expressed in the leaky flask model, and showed that a four-process model can provide a good explanation for how visual information is processed and stored in memory. Full article
(This article belongs to the Special Issue Sensory and Working Memory: Stimulus Encoding, Storage, and Retrieval)
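The abstract's model-selection step — expressing candidate memory architectures as statistical mixture models and ranking them by BIC — can be sketched in miniature. The example below is illustrative only, not the authors' models or data: it invents synthetic continuous-report errors as a mixture of a "remembered" Gaussian component and a uniform "guessing" component, then checks that BIC prefers the two-component mixture over a guessing-only model.

```python
import math
import random

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

random.seed(1)
# Hypothetical report errors (degrees): 70% remembered items (narrow
# Gaussian around 0), 30% guesses (uniform over the response range).
errors = [random.gauss(0, 10) if random.random() < 0.7
          else random.uniform(-90, 90) for _ in range(500)]

def loglik_uniform(xs):
    # Guessing-only model: uniform density over the 180-degree range.
    return len(xs) * math.log(1 / 180)

def normal_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def loglik_mixture(xs, w, sd):
    # Memory + guessing: weight w on the Gaussian, 1-w on the uniform.
    u = 1 / 180
    return sum(math.log(w * normal_pdf(x, sd) + (1 - w) * u) for x in xs)

# Fit the mixture's weight and spread by a coarse grid search.
best_ll = max(loglik_mixture(errors, w / 20, sd)
              for w in range(1, 20) for sd in (5, 10, 15, 20))

bic_guess = bic(loglik_uniform(errors), 0, len(errors))  # no free params
bic_mix = bic(best_ll, 2, len(errors))                   # weight + spread
print(bic_mix < bic_guess)  # the mixture wins despite its BIC penalty
```

The BIC penalty term (`k*ln(n)`) is what keeps the comparison honest: a richer model must buy its extra parameters with a genuinely better likelihood, which is the logic behind selecting among the block diagram models described above.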

15 pages, 3968 KiB  
Article
Visual Memory Scan Slopes: Their Changes over the First Two Seconds of Processing
by Jane Jacob, Bruno G. Breitmeyer and Melissa Treviño
Vision 2021, 5(4), 53; https://doi.org/10.3390/vision5040053 - 4 Nov 2021
Viewed by 2239
Abstract
Using the prime–probe comparison paradigm, Jacob, Breitmeyer, and Treviño (2013) demonstrated that information processing in visual short-term memory (VSTM) proceeds through three stages: sensory visible persistence (SVP), nonvisible informational persistence (NIP), and visual working memory (VWM). To investigate the effect of increasing the memory load on these stages, measures of VSTM performance, including storage, storage-slopes, and scan-slopes, were obtained using displays of 1, 3, and 5 items. Results again revealed three stages of VSTM processing, but with the NIP stage increasing in duration as memory load increased, suggesting delays, during the NIP stage, in transferring and encoding information into VWM. Consistent with this, VSTM scan-slopes, in ms/item, were lowest during the first NIP stage, highest during the second NIP stage, and intermediate during the third, non-sensory VWM stage. The results also demonstrated a color-superiority effect, as all VSTM scan-slopes for color were lower, and all VSTM storages for color were greater, than those for shape, and the existence of systematic pair-wise correlations between all three measures of VSTM performance. These findings and their implications are related to other paradigms and methods used to investigate post-stimulus processing in VSTM. Full article

13 pages, 697 KiB  
Article
The Sternberg Paradigm: Correcting Encoding Latencies in Visual and Auditory Test Designs
by Julian Klabes, Sebastian Babilon, Babak Zandi and Tran Quoc Khanh
Vision 2021, 5(2), 21; https://doi.org/10.3390/vision5020021 - 4 May 2021
Cited by 9 | Viewed by 7969
Abstract
The Sternberg task is a widely used tool for assessing working memory performance in vision and cognitive science. Either a visual or an auditory variant of the Sternberg task can be applied to query memory load. However, previous studies have shown that subjects’ reaction times differ depending on the variant used. In this work, we present an experimental approach intended to correct the reaction-time differences observed between auditory and visual item presentation. We found that the subjects’ reaction-time offset is related to the encoding speed of a single probe item. After correcting for these individual encoding latencies, differences between the results of the auditory and visual Sternberg tasks become non-significant, p = 0.252. Thus, an equal task difficulty can be concluded for both variants of item presentation. Full article
(This article belongs to the Special Issue Sensory and Working Memory: Stimulus Encoding, Storage, and Retrieval)
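The correction logic described in the abstract can be illustrated with a toy example. In the Sternberg paradigm, reaction time grows roughly linearly with memory set size; the slope reflects the scan rate, while the intercept absorbs encoding and response processes. The numbers below are invented for the sketch (not the study's data): both modalities share a scan slope, the auditory condition carries a constant encoding offset, and subtracting that offset, estimated from the single-item condition, aligns the two RT functions.

```python
# Hypothetical mean reaction times (ms) by memory set size.
set_sizes = [1, 2, 4, 6]
rt_visual = [450, 490, 570, 650]    # scan slope 40 ms/item
rt_auditory = [520, 560, 640, 720]  # same slope, later encoding

def fit_line(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope_v, icept_v = fit_line(set_sizes, rt_visual)
slope_a, icept_a = fit_line(set_sizes, rt_auditory)

# Equal scan slopes: the modalities differ only in their intercepts.
# Estimate the encoding offset from the single-item (set size 1)
# condition and subtract it from the auditory RTs.
offset = rt_auditory[0] - rt_visual[0]
corrected = [rt - offset for rt in rt_auditory]
print(corrected == rt_visual)  # the corrected functions coincide
```

The key point mirrored here is that the correction shifts the intercept only; the memory-scanning slope, the quantity of theoretical interest, is untouched, so any remaining modality difference after correction would have to reflect scanning itself rather than encoding.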

10 pages, 654 KiB  
Article
Effects of Audiovisual Memory Cues on Working Memory Recall
by Hilary C. Pearson and Jonathan M. P. Wilbiks
Vision 2021, 5(1), 14; https://doi.org/10.3390/vision5010014 - 19 Mar 2021
Cited by 6 | Viewed by 7965
Abstract
Previous studies have focused on topics such as multimodal integration and object discrimination, but there is limited research on the effect of multimodal learning in memory. Perceptual studies have shown facilitative effects of multimodal stimuli for learning; the current study aims to determine whether this effect persists with memory cues. The purpose of this study was to investigate the effect that audiovisual memory cues have on memory recall, as well as whether the use of multiple memory cues leads to higher recall. The goal was to orthogonally evaluate the effect of the number of self-generated memory cues (one or three), and the modality of the self-generated memory-cue (visual: written words, auditory: spoken words, or audiovisual). A recall task was administered where participants were presented with their self-generated memory cues and asked to determine the target word. There was a significant main effect for number of cues, but no main effect for modality. A secondary goal of this study was to determine which types of memory cues result in the highest recall. Self-reference cues resulted in the highest accuracy score. This study has applications to improving academic performance by using the most efficient learning techniques. Full article

Review


16 pages, 2705 KiB  
Review
Information Integration and Information Storage in Retinotopic and Non-Retinotopic Sensory Memory
by Haluk Öğmen and Michael H. Herzog
Vision 2021, 5(4), 61; https://doi.org/10.3390/vision5040061 - 13 Dec 2021
Viewed by 3192
Abstract
The first stage of the Atkinson–Shiffrin model of human memory is a sensory memory (SM). The visual component of the SM was shown to operate within a retinotopic reference frame. However, a retinotopic SM (rSM) is unable to account for vision under natural viewing conditions because, for example, motion information needs to be analyzed across space and time. For this reason, the SM store of the Atkinson–Shiffrin model has been extended to include a non-retinotopic component (nrSM). In this paper, we analyze findings from two experimental paradigms and show drastically different properties of rSM and nrSM. We show that nrSM involves complex processes such as motion-based reference frames and Gestalt grouping, which establish object identities across space and time. We also describe a quantitative model for nrSM and show drastic differences between the spatio-temporal properties of rSM and nrSM. Since the reference-frame of the latter is non-retinotopic and motion-stream based, we suggest that the spatiotemporal properties of the nrSM are in accordance with the spatiotemporal properties of the motion system. Overall, these findings indicate that, unlike the traditional rSM, which is a relatively passive store, nrSM exhibits sophisticated processing properties to manage the complexities of ecological perception. Full article

12 pages, 260 KiB  
Review
The Short-Term Retention of Depth
by Adam Reeves and Jiehui Qian
Vision 2021, 5(4), 59; https://doi.org/10.3390/vision5040059 - 8 Dec 2021
Cited by 5 | Viewed by 2387
Abstract
We review research on the visual working memory for information portrayed by items arranged in depth (i.e., distance to the observer) within peri-personal space. Most items lose their metric depths within half a second, even though their identities and spatial positions are retained. The paradoxical loss of depth information may arise because visual working memory retains the depth of a single object for the purpose of actions such as pointing or grasping, which usually apply to only one thing at a time. Full article