Search Results (123)

Search Parameters:
Keywords = auditory lateralization

12 pages, 1171 KB  
Article
Is Pupil Response to Speech and Music in Toddlers with Cochlear Implants Asymmetric?
by Amanda Saksida, Marta Fantoni, Sara Ghiselli and Eva Orzan
Audiol. Res. 2025, 15(4), 108; https://doi.org/10.3390/audiolres15040108 - 14 Aug 2025
Viewed by 234
Abstract
Background: Ear advantage (EA) reflects hemispheric asymmetries in auditory processing. While a right-ear advantage (REA) for speech and a left-ear advantage (LEA) for music are well documented in typically developing individuals, it is unclear how these patterns manifest in young children with cochlear implants (CIs). This study investigated whether pupillometry could reveal asymmetric listening efforts in toddlers with bilateral CIs when listening to speech and music under monaural stimulation. Methods: Thirteen toddlers (mean age = 36.2 months) with early bilateral CIs participated. Pupillary responses were recorded during passive listening to speech and music stimuli, presented in quiet or with background noise. Each child was tested twice, once with only the left CI active and once with only the right CI active. Linear mixed-effects models assessed the influence of session (left/right CI), signal type (speech/music), and background noise. Results: A significant interaction between session and signal type was observed (p = 0.047). Speech elicited larger pupil sizes when processed through the left CI, while music showed no significant lateralized effects. Age and speech therapy frequency moderated pupil responses in speech and music trials, respectively. Conclusions: Pupillometry reveals subtle asymmetric listening effort in young CI users depending on the listening ear, suggesting early emerging functional lateralization despite sensory deprivation and device-mediated hearing. Full article
(This article belongs to the Section Hearing)
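For context on the analysis named in this abstract, the sketch below shows what a linear mixed-effects model with a session × signal-type interaction and per-child random intercepts can look like in Python. It is a minimal illustration under assumptions, not the authors' code; the data file and column names (pupil_trials.csv, pupil_size, session, signal_type, noise, child_id) are hypothetical.

```python
# Minimal sketch of a linear mixed-effects model like the one described above
# (not the authors' code). The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pupil_trials.csv")  # long format: one row per trial

model = smf.mixedlm(
    "pupil_size ~ session * signal_type + noise",  # fixed effects, incl. the interaction
    data=df,
    groups=df["child_id"],                         # random intercept per child
)
result = model.fit()
print(result.summary())  # the session:signal_type term tests the reported interaction
```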

16 pages, 1481 KB  
Article
Effects of Underwater Noise Exposure on Early Development in Zebrafish
by Tong Zhou, Yuchi Duan, Ya Li, Wei Yang and Qiliang Chen
Animals 2025, 15(15), 2310; https://doi.org/10.3390/ani15152310 - 7 Aug 2025
Viewed by 365
Abstract
Anthropogenic noise pollution is a significant global environmental issue that adversely affects the behavior, physiology, and auditory functions of aquatic species. However, studies on the effects of underwater noise on early developmental stages of fish remain scarce, particularly regarding the differential impacts of daytime versus nighttime noise exposure. In this study, zebrafish (Danio rerio) embryos were assigned to a control group (no additional noise), a daytime noise group (100–1000 Hz, 130 dB, from 08:00 to 20:00), or a nighttime noise group (100–1000 Hz, 130 dB, from 20:00 to 08:00) for 5 days, and their embryonic development and oxidative stress levels were analyzed. The results indicated that, compared to the control group, exposure to both daytime and nighttime noise delayed embryo hatching and significantly decreased larval heart rate. Notably, exposure to nighttime noise significantly increased the larval deformity rate. Noise exposure, particularly at night, elevated the activities of catalase (CAT) and glutathione peroxidase (GPX), as well as the concentration of malondialdehyde (MDA), accompanied by upregulation of antioxidant-related gene expression. Nighttime noise exposure significantly increased the rate of abnormal otolith development in larvae and markedly downregulated expression of otop1, a gene involved in regulating otolith development, whereas daytime noise exposure induced only a slight increase in the otolith abnormality rate. After noise exposure, the number of lateral neuromasts in larvae decreased slightly, yet genes related to hair cell development (slc17a8 and capgb) were significantly upregulated. Overall, this study demonstrates that both daytime and nighttime noise can induce oxidative stress and impair embryonic development in zebrafish, with nighttime noise causing more severe damage. Full article
(This article belongs to the Section Animal Physiology)

17 pages, 2799 KB  
Article
The Phenomenology of Offline Perception: Multisensory Profiles of Voluntary Mental Imagery and Dream Imagery
by Maren Bilzer and Merlin Monzel
Vision 2025, 9(2), 37; https://doi.org/10.3390/vision9020037 - 21 Apr 2025
Cited by 1 | Viewed by 1578
Abstract
Both voluntary mental imagery and dream imagery involve multisensory representations without externally present stimuli that can be categorized as offline perceptions. Due to common mechanisms, correlations between multisensory dream imagery profiles and multisensory voluntary mental imagery profiles were hypothesized. In a sample of 226 participants, correlations within the respective state of consciousness were significantly bigger than across, favouring two distinct networks. However, the association between the vividness of voluntary mental imagery and vividness of dream imagery was moderated by the frequency of dream recall and lucid dreaming, suggesting that both networks become increasingly similar when higher metacognition is involved. Additionally, the vividness of emotional and visual imagery was significantly higher for dream imagery than for voluntary mental imagery, reflecting the immersive nature of dreams and the continuity of visual dominance while being awake and asleep. In contrast, the vividness of auditory, olfactory, gustatory, and tactile imagery was higher for voluntary mental imagery, probably due to higher cognitive control while being awake. Most results were replicated four weeks later, weakening the notion of state influences. Overall, our results indicate similarities between dream imagery and voluntary mental imagery that justify a common classification as offline perception, but also highlight important differences. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)

19 pages, 1537 KB  
Review
Repulsive Guidance Molecule-A as a Therapeutic Target Across Neurological Disorders: An Update
by Vasilis-Spyridon Tseriotis, Andreas Liampas, Irene Zacharo Lazaridou, Sofia Karachrysafi, George D. Vavougios, Georgios M. Hadjigeorgiou, Theodora Papamitsou, Dimitrios Kouvelas, Marianthi Arnaoutoglou, Chryssa Pourzitaki and Theodoros Mavridis
Int. J. Mol. Sci. 2025, 26(7), 3221; https://doi.org/10.3390/ijms26073221 - 30 Mar 2025
Cited by 5 | Viewed by 2106
Abstract
Repulsive guidance molecule-a (RGMa) has emerged as a significant therapeutic target in a variety of neurological disorders, including neurodegenerative diseases and acute conditions. This review comprehensively examines the multifaceted role of RGMa in central nervous system (CNS) pathologies such as Alzheimer’s disease, Parkinson’s disease, amyotrophic lateral sclerosis, multiple sclerosis, neuromyelitis optica spectrum disorder, spinal cord injury, stroke, vascular dementia, auditory neuropathy, and epilepsy. The mechanisms through which RGMa contributes to neuroinflammation, neuronal degeneration, and impaired axonal regeneration are discussed. Evidence from preclinical studies associates RGMa overexpression with negative outcomes, such as increased neuroinflammation and synaptic loss, while RGMa inhibition, particularly with agents like elezanumab, has shown promise in enhancing neuronal survival and functional recovery. RGMa’s involvement in immunomodulation and neurogenesis further highlights its potential as a therapeutic avenue. We emphasize RGMa’s critical role in CNS pathology and its potential to pave the way for innovative treatment strategies in neurological disorders. While preclinical findings are encouraging, further clinical trials are needed to validate the safety and efficacy of RGMa-targeted therapies. Full article

15 pages, 4108 KB  
Article
Vocal Emotion Perception and Musicality—Insights from EEG Decoding
by Johannes M. Lehnen, Stefan R. Schweinberger and Christine Nussbaum
Sensors 2025, 25(6), 1669; https://doi.org/10.3390/s25061669 - 8 Mar 2025
Viewed by 1138
Abstract
Musicians have an advantage in recognizing vocal emotions compared to non-musicians, a performance advantage often attributed to enhanced early auditory sensitivity to pitch. Yet a previous ERP study only detected group differences from 500 ms onward, suggesting that conventional ERP analyses might not be sensitive enough to detect early neural effects. To address this, we re-analyzed EEG data from 38 musicians and 39 non-musicians engaged in a vocal emotion perception task. Stimuli were generated using parameter-specific voice morphing to preserve emotional cues in either the pitch contour (F0) or timbre. By employing a neural decoding framework with a Linear Discriminant Analysis classifier, we tracked the evolution of emotion representations over time in the EEG signal. Converging with the previous ERP study, our findings reveal that musicians—but not non-musicians—exhibited significant emotion decoding between 500 and 900 ms after stimulus onset, a pattern observed for F0-Morphs only. These results suggest that musicians’ superior vocal emotion recognition arises from more effective integration of pitch information during later processing stages rather than from enhanced early sensory encoding. Our study also demonstrates the potential of neural decoding approaches using EEG brain activity as a biological sensor for unraveling the temporal dynamics of voice perception. Full article
(This article belongs to the Special Issue Sensing Technologies in Neuroscience and Brain Research)
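As a rough illustration of the time-resolved decoding framework described above (a sketch under assumptions, not the study's pipeline), a linear discriminant classifier can be cross-validated at each EEG time sample; the array shapes and file names below are hypothetical.

```python
# Illustrative time-resolved decoding sketch (not the study's pipeline).
# Assumes epoched EEG of shape (n_trials, n_channels, n_times) and one label per trial.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.load("epochs.npy")   # hypothetical epoched EEG data
y = np.load("labels.npy")   # hypothetical emotion label per trial

scores = np.empty(X.shape[2])
for t in range(X.shape[2]):
    clf = LinearDiscriminantAnalysis()
    # decode the emotion category from the spatial pattern at this time sample
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

# A sustained run of above-chance scores (e.g., 500-900 ms post-onset) marks when
# emotion information is decodable from the EEG.
```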

12 pages, 1046 KB  
Article
Assessing the Recognition of Social Interactions Through Body Motion in the Routine Care of Patients with Post-Lingual Sensorineural Hearing Loss
by Cordélia Fauvet, Léa Cantini, Aude-Eva Chaudoreille, Elisa Cancian, Barbara Bonnel, Chloé Sérignac, Alexandre Derreumaux, Philippe Robert, Nicolas Guevara, Auriane Gros and Valeria Manera
J. Clin. Med. 2025, 14(5), 1604; https://doi.org/10.3390/jcm14051604 - 27 Feb 2025
Viewed by 508
Abstract
Background: Body motion significantly contributes to understanding communicative and social interactions, especially when auditory information is impaired. The visual skills of people with hearing loss are often enhanced and compensate for some of the missing auditory information. In the present study, we investigated the recognition of social interactions by observing body motion in people with post-lingual sensorineural hearing loss (SNHL). Methods: In total, 38 participants with post-lingual SNHL and 38 matched normally hearing individuals (NHIs) were presented with point-light stimuli of two agents who were either engaged in a communicative interaction or acting independently. They were asked to classify the actions as communicative vs. independent and to select the correct action description. Results: No significant differences were found between the participants with SNHL and the NHIs when classifying the actions. However, the participants with SNHL showed significantly lower performance compared with the NHIs in the description task due to a higher tendency to misinterpret communicative stimuli. In addition, acquired SNHL was associated with a significantly higher number of errors, with a tendency to over-interpret independent stimuli as communicative and to misinterpret communicative actions. Conclusions: The findings of this study suggest a misinterpretation of visual understanding of social interactions in individuals with SNHL and over-interpretation of communicative intentions in SNHL acquired later in life. Full article

22 pages, 1826 KB  
Article
Visual Cortical Processing in Children with Early Bilateral Cochlear Implants: A VEP Analysis
by Ola Badarni-Zahalka, Ornella Dakwar-Kawar, Cahtia Adelman, Salma Khoury-Shoufani and Josef Attias
Children 2025, 12(3), 278; https://doi.org/10.3390/children12030278 - 25 Feb 2025
Viewed by 1144
Abstract
Background/Objectives: Cochlear implantation is the primary treatment for severe-to-profound hearing loss, yet outcomes vary significantly among recipients. While visual–auditory cross-modal reorganization has been identified as a contributing factor to this variability, its impact in early-implanted children remains unclear. To address this knowledge gap, we investigated visual processing and its relationship with auditory outcomes in children who received early bilateral cochlear implants. Methods: To examine potential cross-modal reorganization, we recorded visual evoked potentials (VEPs) in response to pattern-reversal stimuli in 25 children with cochlear implants (CIs) (mean implantation age: 1.44 years) and 28 age-matched normal-hearing (NH) controls. Analysis focused on both the occipital region of interest (ROI: O1, OZ, and O2 electrode sites) and right temporal ROI, examining VEP components and their correlation with speech perception outcomes. Results: Unlike previous studies in later-implanted children, the overall occipital ROI showed no significant differences between groups. However, the left occipital electrode (O1) revealed reduced P1 amplitudes and delayed N1 latencies in CI users. Importantly, O1 N1 latency negatively correlated with speech-in-noise performance (r = −0.318; p = 0.02). The right temporal region showed no significant differences in VEP N1 between groups and no correlation with speech performance in CI users. Conclusions: Early bilateral cochlear implantation appears to preserve global visual processing, suggesting minimal maladaptive reorganization. However, subtle alterations in left occipital visual processing may influence auditory outcomes, highlighting the importance of early intervention and the complex nature of sensory integration in this population. Full article

13 pages, 2429 KB  
Article
Electrophysiological Variations in Auditory Potentials in Chronic Tinnitus Individuals: Treatment Response and Tinnitus Laterality
by Ourania Manta, Dimitris Kikidis, Winfried Schlee, Berthold Langguth, Birgit Mazurek, Jose A. Lopez-Escamez, Juan Martin-Lagos, Rilana Cima, Konstantinos Bromis, Eleftheria Vellidou, Zoi Zachou, Nikos Markatos, Evgenia Vassou, Ioannis Kouris, George K. Matsopoulos and Dimitrios D. Koutsouris
J. Clin. Med. 2025, 14(3), 760; https://doi.org/10.3390/jcm14030760 - 24 Jan 2025
Viewed by 1159
Abstract
Background: This study investigates electrophysiological distinctions in auditory evoked potentials (AEPs) among individuals with chronic subjective tinnitus, with a specific focus on the impact of treatment response and tinnitus localisation. Methods: Early AEPs, known as Auditory Brainstem Responses (ABR), and middle AEPs, termed Auditory Middle Latency Responses (AMLR), were analysed in tinnitus patients across four clinical centers in an attempt to verify increased neuronal activity, in accordance with the current tinnitus models. Our statistical analyses primarily focused on discrepancies in time–domain core features of ABR and AMLR signals, including amplitudes and latencies, concerning both treatment response and tinnitus laterality. Results: Statistically significant differences were observed in ABR wave III and V latencies, ABR wave III peak amplitude, and AMLR wave Na and Nb amplitudes when comparing groups based on their response to treatment, accompanied by varying effect sizes. Conversely, when examining groups categorised by tinnitus laterality, no statistically significant differences emerged. Conclusions: These results provide valuable insights into the potential influence of treatment responses on AEPs. However, further research is imperative to attain a comprehensive understanding of the underlying mechanisms at play. Full article
(This article belongs to the Section Otolaryngology)

13 pages, 1770 KB  
Article
Exploring Musical Feedback for Gait Retraining: A Novel Approach to Orthopedic Rehabilitation
by Luisa Cedin, Christopher Knowlton and Markus A. Wimmer
Healthcare 2025, 13(2), 144; https://doi.org/10.3390/healthcare13020144 - 14 Jan 2025
Viewed by 1675
Abstract
Background/Objectives: Gait retraining is widely used in orthopedic rehabilitation to address abnormal movement patterns. However, retaining walking modifications can be challenging without guidance from physical therapists. Real-time auditory biofeedback can help patients learn and maintain gait alterations. This study piloted the feasibility of the musification of feedback to medialize the center of pressure (COP). Methods: To provide musical feedback, COP and plantar pressure were captured in real time at 100 Hz from a wireless 16-sensor pressure insole. Twenty healthy subjects (29 ± 5 years old, 75.9 ± 10.5 Kg, 1.73 ± 0.07 m) were recruited to walk using this system and were further analyzed via marker-based motion capture. A lowpass filter muffled a pre-selected music playlist when the real-time center of pressure exceeded a predetermined lateral threshold. The only instruction participants received was to adjust their walking to avoid the muffling of the music. Results: All participants significantly medialized their COP (−9.38% ± 4.37, range −2.3% to −19%), guided solely by musical feedback. Participants were still able to reproduce this new walking pattern when the musical feedback was removed. Importantly, no significant changes in cadence or walking speed were observed. The results from a survey showed that subjects enjoyed using the system and suggested that they would adopt such a system for rehabilitation. Conclusions: This study highlights the potential of musical feedback for orthopedic rehabilitation. In the future, a portable system will allow patients to train at home, while clinicians could track their progress remotely through cloud-enabled telemetric health data monitoring. Full article
(This article belongs to the Special Issue 2nd Edition of the Expanding Scope of Music in Healthcare)
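As a simplified view of the feedback rule described above (an illustrative sketch, not the authors' implementation), the music can be muffled through a lowpass filter whenever the mediolateral center of pressure drifts lateral of a per-subject threshold; the cutoff values, threshold, and function name below are hypothetical.

```python
# Simplified sketch of the feedback logic described above (not the authors' system).
# The 100 Hz insole stream and the audio engine are abstracted away; the constants
# and names are hypothetical.
MUFFLED_CUTOFF_HZ = 800.0      # lowpass cutoff applied when the COP drifts laterally
FULL_BANDWIDTH_HZ = 20000.0    # effectively unfiltered playback

def feedback_cutoff(cop_lateral_mm: float, threshold_mm: float) -> float:
    """Return the lowpass cutoff to apply to the music for one 100 Hz COP sample."""
    if cop_lateral_mm > threshold_mm:
        return MUFFLED_CUTOFF_HZ   # muffle the music: cue the wearer to medialize
    return FULL_BANDWIDTH_HZ       # COP is medial enough: play the music normally
```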

13 pages, 2639 KB  
Article
Functional Connectivity Biomarker Extraction for Schizophrenia Based on Energy Landscape Machine Learning Techniques
by Janerra D. Allen, Sravani Varanasi, Fei Han, L. Elliot Hong and Fow-Sen Choa
Sensors 2024, 24(23), 7742; https://doi.org/10.3390/s24237742 - 4 Dec 2024
Cited by 2 | Viewed by 1673
Abstract
Brain connectivity represents the functional organization of the brain, which is an important indicator for evaluating neuropsychiatric disorders and treatment effects. Schizophrenia is associated with impaired functional connectivity but characterizing the complex abnormality patterns has been challenging. In this work, we used resting-state functional magnetic resonance imaging (fMRI) data to measure functional connectivity between 55 schizophrenia patients and 63 healthy controls across 246 regions of interest (ROIs) and extracted the disease-related connectivity patterns using energy landscape (EL) analysis. EL analysis captures the complexity of brain function in schizophrenia by focusing on functional brain state stability and region-specific dynamics. Age, sex, and smoker demographics between patients and controls were not significantly different. However, significant patient and control differences were found for the brief psychiatric rating scale (BPRS), auditory perceptual trait and state (APTS), visual perceptual trait and state (VPTS), working memory score, and processing speed score. We found that the brains of individuals with schizophrenia have abnormal energy landscape patterns between the right and left rostral lingual gyrus, and between the left lateral and orbital area in 12/47 regions. The results demonstrate the potential of the proposed imaging analysis workflow to identify potential connectivity biomarkers by indexing specific clinical features in schizophrenia patients. Full article

18 pages, 316 KB  
Review
Auditory and Vestibular Involvement in Congenital Cytomegalovirus Infection
by Swetha G. Pinninti, William J. Britt and Suresh B. Boppana
Pathogens 2024, 13(11), 1019; https://doi.org/10.3390/pathogens13111019 - 20 Nov 2024
Cited by 4 | Viewed by 2006
Abstract
Congenital cytomegalovirus infection (cCMV) is a frequent cause of non-hereditary sensorineural hearing loss (SNHL) and developmental disabilities. cCMV is estimated to account for about 25% of all hearing loss in children at 4 years of age. Although vestibular insufficiency (VI) in cCMV has not been well characterized, and is therefore likely underestimated, recent studies suggest that VI is also frequent in children with cCMV and can lead to adverse neurodevelopmental outcomes. The pathogenesis of SNHL and VI in children with cCMV is thought to involve both direct viral cytopathic effects and local inflammatory responses. Hearing loss in cCMV can vary in severity, can be unilateral or bilateral, can be present at birth or develop later (late-onset), and can progress or fluctuate in early childhood. As a result, newborn hearing screening fails to identify a significant number of children with CMV-related SNHL. Although the natural history of cCMV-associated VI has not been well characterized, recent data suggest that VI likely also varies considerably with respect to laterality, timing of onset, degree of deficit, and continued deterioration during early childhood. This article summarizes the current understanding of the natural history and pathogenesis of auditory and vestibular disorders in children with cCMV. Full article
16 pages, 5798 KB  
Article
Voice Assessment in Patients with Amyotrophic Lateral Sclerosis: An Exploratory Study on Associations with Bulbar and Respiratory Function
by Pedro Santos Rocha, Nuno Bento, Hanna Svärd, Diana Monteiro Lopes, Sandra Hespanhol, Duarte Folgado, André Valério Carreiro, Mamede de Carvalho and Bruno Miranda
Brain Sci. 2024, 14(11), 1082; https://doi.org/10.3390/brainsci14111082 - 29 Oct 2024
Viewed by 1516
Abstract
Background: Speech production is a possible way to monitor bulbar and respiratory functions in patients with amyotrophic lateral sclerosis (ALS). Moreover, the emergence of smartphone-based data collection offers a promising approach to reduce frequent hospital visits and enhance patient outcomes. Here, we studied the relationship between bulbar and respiratory functions with voice characteristics of ALS patients, alongside a speech therapist’s evaluation, at the convenience of using a simple smartphone. Methods: For voice assessment, we considered a speech therapist’s standardized tool—consensus auditory-perceptual evaluation of voice (CAPE-V); and an acoustic analysis toolbox. The bulbar sub-score of the revised ALS functional rating scale (ALSFRS-R) was used, and pulmonary function measurements included forced vital capacity (FVC%), maximum expiratory pressure (MEP%), and maximum inspiratory pressure (MIP%). Correlation coefficients and both linear and logistic regression models were applied. Results: A total of 27 ALS patients (12 males; 61 years mean age; 28 months median disease duration) were included. Patients with significant bulbar dysfunction revealed greater CAPE-V scores in overall severity, roughness, strain, pitch, and loudness. They also presented slower speaking rates, longer pauses, and higher jitter values in acoustic analysis (all p < 0.05). The CAPE-V’s overall severity and sub-scores for pitch and loudness demonstrated significant correlations with MIP% and MEP% (all p < 0.05). In contrast, acoustic metrics (speaking rate, absolute energy, shimmer, and harmonic-to-noise ratio) significantly correlated with FVC% (all p < 0.05). Conclusions: The results provide supporting evidence for the use of smartphone-based recordings in ALS patients for CAPE-V and acoustic analysis as reliable correlates of bulbar and respiratory function. Full article

17 pages, 1968 KB  
Article
A Dual Role for the Dorsolateral Prefrontal Cortex (DLPFC) in Auditory Deviance Detection
by Manon E. Jaquerod, Ramisha S. Knight, Alessandra Lintas and Alessandro E. P. Villa
Brain Sci. 2024, 14(10), 994; https://doi.org/10.3390/brainsci14100994 - 29 Sep 2024
Cited by 1 | Viewed by 2255
Abstract
Background: In the oddball paradigm, the dorsolateral prefrontal cortex (DLPFC) is often associated with active cognitive responses, such as maintaining information in working memory or adapting response strategies. While some evidence points to the DLPFC’s role in passive auditory deviance perception, a detailed understanding of the spatiotemporal neurodynamics involved remains unclear. Methods: In this study, event-related optical signals (EROS) and event-related potentials (ERPs) were simultaneously recorded for the first time over the prefrontal cortex using a 64-channel electroencephalography (EEG) system, during passive auditory deviance perception in 12 right-handed young adults (7 women and 5 men). In this oddball paradigm, deviant stimuli (a 1500 Hz pure tone) elicited a negative shift in the N1 ERP component, related to mismatch negativity (MMN), and a significant positive deflection associated with the P300, compared to standard stimuli (a 1000 Hz tone). Results: We hypothesize that the DLPFC not only participates in active tasks but also plays a critical role in processing deviant stimuli in passive conditions, shifting from pre-attentive to attentive processing. We detected enhanced neural activity in the left middle frontal gyrus (MFG), at the same timing of the MMN component, followed by later activation at the timing of the P3a ERP component in the right MFG. Conclusions: Understanding these dynamics will provide deeper insights into the DLPFC’s role in evaluating the novelty or unexpectedness of the deviant stimulus, updating its cognitive value, and adjusting future predictions accordingly. However, the small number of subjects could limit the generalizability of the observations, in particular with respect to the effect of handedness, and additional studies with larger and more diverse samples are necessary to validate our conclusions. Full article
(This article belongs to the Section Behavioral Neuroscience)

19 pages, 10602 KB  
Article
Effects of Gradual Spatial and Temporal Cues Provided by Synchronized Walking Avatar on Elderly Gait
by Dane A. L. Miller, Hirotaka Uchitomi and Yoshihiro Miyake
Appl. Sci. 2024, 14(18), 8374; https://doi.org/10.3390/app14188374 - 18 Sep 2024
Cited by 2 | Viewed by 1300
Abstract
Aging often leads to elderly gait characterized by slower speeds, shorter strides, and increased cycle times; improving gait can significantly enhance quality of life. Early gait training can help reduce gait impairment later on. Augmented reality (AR) technologies have shown promise in gait training, providing real-time feedback and guided exercises to improve walking patterns and gait parameters. The aim of this study was to observe the effects of gradual spatial and temporal cues provided by a synchronized walking avatar on the gait of elderly participants. This experiment involved 19 participants aged over 70 years, who walked while interacting with a synchronized walking avatar that provided audiovisual spatial and temporal cues. Spatial cueing and temporal cueing were provided through distance changes and phase difference changes, respectively. The WalkMate AR system was used to synchronize the avatar’s walking cycle with the participants’, delivering auditory cues matched to foot contacts. This study assessed the immediate and carry-over effects of changes in distance and phase difference on stride length, cycle time, and gait speed. The results indicate that gradual spatial and temporal cueing significantly influences elderly gait parameters, with potential applications in gait rehabilitation and training. Full article

11 pages, 1837 KB  
Article
Echoes from Sensory Entrainment in Auditory Working Memory for Pitch
by Matthew G. Wisniewski
Brain Sci. 2024, 14(8), 792; https://doi.org/10.3390/brainsci14080792 - 7 Aug 2024
Viewed by 1509
Abstract
Ongoing neural oscillations reflect cycles of excitation and inhibition in local neural populations, with individual neurons being more or less likely to fire depending upon the oscillatory phase. As a result, the oscillations could determine whether or not a sound is perceived and/or whether its neural representation enters into later processing stages. While empirical support for this idea has come from sound detection studies, large gaps in knowledge still exist regarding memory for sound events. In the current study, it was investigated how sensory entrainment impacts the fidelity of working memory representations for pitch. In two separate experiments, an 8 Hz amplitude modulated (AM) entraining stimulus was presented prior to a multitone complex having an f0 between 270 and 715 Hz. This “target” sound could be presented at phases from 0 to 2π radians in relation to the previous AM. After a retention interval of 4 s (Experiment 1; n = 26) or 2 s (Experiment 2; n = 28), listeners were tasked to reproduce the target sound’s pitch by moving their finger along the horizontal axis of a response pad. It was hypothesized that if entrainment modulates auditory working memory fidelity, reproductions of a target’s pitch would be more accurate and precise when targets were presented in phase with the entrainment. Cosine fits of the average data for both experiments showed a significant entrainment “echo” in the accuracy of pitch matches. There was no apparent echo in the matching precision. Fitting of the individual data accuracy showed that the optimal phase was consistent across individuals, aligning near the next AM peak had the AM continued. The results show that sensory entrainment modulates auditory working memory in addition to stimulus detection, consistent with the proposal that ongoing neural oscillatory activity modulates higher-order auditory processes. Full article
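To make the cosine-fit analysis mentioned above concrete, the sketch below fits a one-cycle cosine to mean pitch-matching accuracy as a function of target phase; the accuracy values are synthetic placeholders (not the paper's data), and the fitted preferred phase plays the role of the reported optimal phase.

```python
# Minimal cosine-fit sketch for accuracy-by-phase data (synthetic illustration only,
# not the paper's data or code).
import numpy as np
from scipy.optimize import curve_fit

phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)                   # target phases (rad)
accuracy = np.array([0.62, 0.66, 0.71, 0.67, 0.61, 0.57, 0.55, 0.58])   # hypothetical means

def cosine(phase, amplitude, preferred_phase, offset):
    """One cycle per 2*pi of phase, peaking at the preferred phase."""
    return amplitude * np.cos(phase - preferred_phase) + offset

params, _ = curve_fit(cosine, phases, accuracy, p0=[0.05, np.pi / 2, 0.6])
amplitude, preferred_phase, offset = params
print(f"echo amplitude: {amplitude:.3f}, optimal phase: {preferred_phase:.2f} rad")
```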
