Search Results (425)

Search Parameters:
Keywords = auditory perception

30 pages, 1700 KB  
Article
Sensory Processing of Time and Space in Autistic Children
by Franz Coelho, Belén Rando, Mariana Salgado and Ana Maria Abreu
Children 2025, 12(10), 1366; https://doi.org/10.3390/children12101366 - 10 Oct 2025
Viewed by 174
Abstract
Background/Objectives: Autism is characterized by atypical sensory processing, which affects spatial and temporal perception. Here, we explore sensory processing in children with autism, focusing on visuospatial and temporal tasks across visual and auditory modalities. Methods: Ninety-two children aged 4 to 6 participated, divided into three groups: autism (n = 32), neurotypical chronological age-matched controls (n = 28), and neurotypical developmental age-matched controls (n = 32). The autism group consisted of high-functioning children (26 boys). The participants completed computer-based tasks requiring spatial and temporal processing. Response accuracy and reaction times were recorded. Results: The autism group demonstrated higher accuracy in the temporal tasks (visual and auditory modalities) and comparable accuracy in the visuospatial task, but slower response times in all tasks compared with both neurotypical control groups. These results suggest a strategy that prioritizes accuracy over speed while preserving spatial and temporal processing in autism. Conclusions: These findings suggest that temporal processing, rather than sensory modality, drives decision-making strategies in children with autism, and they highlight the need for interventions aligned with autistic children’s slower but accurate processing style to support social interaction and reduce stress. In a fast-paced, digitalized world, autistic children might benefit from slower, balanced, inclusive, and evidence-based approaches that align with their cognitive rhythm and reduce overstimulation. By incorporating these unique strategies, targeted programs can enhance the quality of life and adaptive skills of children with autism, fostering better integration into social and sensory-rich environments. Full article
(This article belongs to the Special Issue Children with Autism Spectrum Disorder: Diagnosis and Treatment)

26 pages, 842 KB  
Article
Speech Production Intelligibility Is Associated with Speech Recognition in Adult Cochlear Implant Users
by Victoria A. Sevich, Davia J. Williams, Aaron C. Moberly and Terrin N. Tamati
Brain Sci. 2025, 15(10), 1066; https://doi.org/10.3390/brainsci15101066 - 30 Sep 2025
Viewed by 490
Abstract
Background/Objectives: Adult cochlear implant (CI) users exhibit broad variability in speech perception and production outcomes. Cochlear implantation improves the intelligibility (comprehensibility) of CI users’ speech, but the degraded auditory signal delivered by the CI may attenuate this benefit. Among other effects, degraded auditory feedback can lead to compression of the acoustic–phonetic vowel space, which makes vowel productions confusable, decreasing intelligibility. Sustained exposure to degraded auditory feedback may also weaken phonological representations. The current study examined the relationship between subjective ratings and acoustic measures of speech production, speech recognition accuracy, and phonological processing (cognitive processing of speech sounds) in adult CI users. Methods: Fifteen adult CI users read aloud a series of short words, which were analyzed in two ways. First, acoustic measures of vowel distinctiveness (i.e., vowel dispersion) were calculated. Second, thirty-seven normal-hearing (NH) participants listened to the words produced by the CI users and rated the subjective intelligibility of each word from 1 (least understandable) to 100 (most understandable). CI users also completed an auditory sentence recognition task and a nonauditory cognitive test of phonological processing. Results: CI users rated as having more understandable speech demonstrated more accurate sentence recognition than those rated as having less understandable speech, but intelligibility ratings were only marginally related to phonological processing. Further, vowel distinctiveness was marginally associated with sentence recognition but not related to phonological processing or subjective ratings of intelligibility. Conclusions: The results suggest that speech intelligibility ratings are related to speech recognition accuracy in adult CI users, and future investigation is needed to identify the extent to which this relationship is mediated by individual differences in phonological processing. Full article
(This article belongs to the Special Issue Language, Communication and the Brain—2nd Edition)
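
A note on the acoustic measure: vowel dispersion is conventionally computed as the mean Euclidean distance of each vowel token’s first two formants (F1, F2) from the talker’s vowel-space centroid, so larger values indicate more distinct vowels. The Python sketch below illustrates that general formula; the function name and the formant values are illustrative assumptions, not the authors’ analysis pipeline.

```python
import numpy as np

def vowel_dispersion(formants):
    """Mean Euclidean distance of each token's (F1, F2) from the
    talker's vowel-space centroid, in Hz; larger = more distinct vowels."""
    formants = np.asarray(formants, dtype=float)
    centroid = formants.mean(axis=0)          # centre of the vowel space
    return float(np.linalg.norm(formants - centroid, axis=1).mean())

# Hypothetical F1/F2 values (Hz) for four corner vowels from one talker.
tokens = [(300, 2300), (700, 1800), (650, 1000), (350, 800)]
print(round(vowel_dispersion(tokens), 1))
```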

14 pages, 538 KB  
Article
The MuRQoL-He—Hebrew Adaptation of the Music Related Quality of Life Questionnaire Among Adults Who Are Deaf and Hard of Hearing
by Zahi Tubul, Zvi Tubul-Lavy and Gila Tubul-Lavy
Audiol. Res. 2025, 15(5), 127; https://doi.org/10.3390/audiolres15050127 - 28 Sep 2025
Viewed by 265
Abstract
Purpose: The present study aimed to describe the adaptation and validation of the MuRQoL (Music Related Quality of Life questionnaire) from English to Hebrew and to report normative data from a cohort of adults with normal hearing versus those with hearing aids or cochlear implants. Methods: After the questionnaire was thoroughly translated and adapted into Hebrew, participants completed it online. We calculated Cronbach’s alpha and McDonald’s omega for all scales and subscales. The construct validity of the questionnaire was evaluated using Confirmatory Factor Analysis (CFA) and the “known group” method. A total of 310 adults participated in this study: 54 were deaf or hard of hearing, and 256 had normal hearing. Results: The internal consistency of the MuRQoL-He scales and subscales demonstrated good-to-excellent reliability. The goodness-of-fit indices for the frequency and importance scales were within acceptable standards. We found a significant difference on the frequency scale: the normal-hearing group scored significantly higher than the deaf and hard-of-hearing group. Conclusions: The validity and reliability of the MuRQoL-He have been confirmed, indicating that it is suitable for guiding music rehabilitation for Hebrew-speaking deaf and hard-of-hearing adults. Full article
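
Cronbach’s alpha, one of the internal-consistency measures reported above, follows the standard formula α = k/(k − 1) · (1 − Σσᵢ²/σ²_total), where k is the number of items, σᵢ² the per-item variances, and σ²_total the variance of the total scores. A minimal Python sketch with hypothetical Likert responses (not the study’s data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items.
X = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]]
print(round(cronbach_alpha(X), 3))
```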

22 pages, 2431 KB  
Article
Perceptual Plasticity in Bilinguals: Language Dominance Reshapes Acoustic Cue Weightings
by Annie Tremblay and Hyoju Kim
Brain Sci. 2025, 15(10), 1053; https://doi.org/10.3390/brainsci15101053 - 27 Sep 2025
Viewed by 398
Abstract
Background/Objectives: Speech perception is shaped by language experience, with listeners learning to selectively attend to acoustic cues that are informative in their language. This study investigates how language dominance, a proxy for long-term language experience, modulates cue weighting in highly proficient Spanish–English bilinguals’ perception of English lexical stress. Methods: We tested 39 bilinguals with varying dominance profiles and 40 monolingual English speakers in a stress identification task using auditory stimuli that independently manipulated vowel quality, pitch, and duration. Results: Bayesian logistic regression models revealed that, compared to monolinguals, bilinguals relied less on vowel quality and more on pitch and duration, mirroring cue distributions in Spanish versus English. Critically, cue weighting within the bilingual group varied systematically with language dominance: English-dominant bilinguals patterned more like monolingual English listeners, showing increased reliance on vowel quality and decreased reliance on pitch and duration, whereas Spanish-dominant bilinguals retained a cue weighting that was more Spanish-like. Conclusions: These results support experience-based models of speech perception and provide behavioral evidence that bilinguals’ perceptual attention to acoustic cues remains flexible and dynamically responsive to long-term input. These results are in line with a neurobiological account of speech perception in which attentional and representational mechanisms adapt to changes in the input. Full article
(This article belongs to the Special Issue Language Perception and Processing)
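
The cue-weighting analysis can be illustrated with a toy model: regress stress judgments on standardized cue values and read the fitted coefficients as relative cue weights. The study itself used Bayesian logistic regression; the sketch below substitutes scikit-learn’s ordinary logistic regression, and all trial data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-trial cue values: vowel quality, pitch, duration.
X = rng.normal(size=(n, 3))
# Simulated listener who weights vowel quality most, as monolinguals did.
logit = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = "first-syllable stress"

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
for cue, w in zip(["vowel quality", "pitch", "duration"], model.coef_[0]):
    print(f"{cue:>13}: {w:+.2f}")   # larger |weight| = heavier cue reliance
```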

18 pages, 615 KB  
Article
Auditory Processing and Speech Sound Disorders: Behavioral and Electrophysiological Findings
by Konstantinos Drosos, Paris Vogazianos, Dionysios Tafiadis, Louiza Voniati, Alexandra Papanicolaou, Klea Panayidou and Chryssoula Thodi
Audiol. Res. 2025, 15(5), 119; https://doi.org/10.3390/audiolres15050119 - 19 Sep 2025
Viewed by 388
Abstract
Background: Children diagnosed with Speech Sound Disorders (SSDs) encounter difficulties in speech perception, especially when listening in the presence of background noise. Recommended protocols for auditory processing evaluation include behavioral linguistic and speech processing tests, as well as objective electrophysiological measures. The present study compared the auditory processing profiles of children with SSD and typically developing (TD) children using a battery of behavioral language and auditory tests combined with auditory evoked responses. Methods: Forty parents of 7- to 10-year-old Greek Cypriot children completed questionnaires related to their children’s listening; their children completed an assessment comprising language, phonology, auditory processing, and auditory evoked responses. The experimental group included 24 children with a history of SSDs; the control group consisted of 16 TD children. Results: Three factors significantly differentiated SSD from TD children: Factor 1 (auditory processing screening), Factor 5 (phonological awareness), and Factor 13 (Auditory Brainstem Response—ABR wave V latency). Among these, Factor 1 consistently predicted SSD classification both independently and in combined models, indicating strong ecological and diagnostic relevance; this predictive power suggests that real-world listening behaviors are central to differentiating SSD. The significant correlation between Factor 5 and Factor 13 may indicate an interaction between auditory processing at the brainstem level and higher-order phonological manipulation. Conclusions: This research underscores the diagnostic value of integrating behavioral and physiological metrics through dimensional and predictive methodologies. Factor 1, which focuses on authentic listening environments, was the strongest predictor. These results advocate for the inclusion of ecologically valid listening items in screening for APD. Poor discrimination of speech in noise creates discrepancies between incoming auditory information and phonological representations stored in memory, disrupting the implicit processing mechanisms that align the two. Speech and language pathologists can incorporate pertinent auditory processing assessment findings to identify potential language-processing challenges and formulate more effective therapeutic intervention strategies. Full article
(This article belongs to the Section Speech and Language)

16 pages, 4069 KB  
Article
Exploring Consumer Perception of Augmented Reality (AR) Tools for Displaying and Understanding Nutrition Labels: A Pilot Study
by Cristina Botinestean, Stergios Melios and Emily Crofton
Multimodal Technol. Interact. 2025, 9(9), 97; https://doi.org/10.3390/mti9090097 - 16 Sep 2025
Viewed by 436
Abstract
Augmented reality (AR) technology offers a promising approach to providing consumers with detailed and personalized information about food products. The aim of this pilot study was to explore how AR tools comprising visual and auditory formats affect consumers’ perception and understanding of the nutrition labels of two commercially available products (a lasagne ready meal and a strawberry yogurt). The nutritional information of both products was presented to consumers (n = 30) under three experimental conditions: original packaging, visual AR, and visual and audio AR. Consumers answered questions about their perceptions of the products’ overall healthiness, caloric content, and macronutrient composition, as well as how the information was presented. The results showed that while nutritional information presented under the original packaging condition was more effective in changing consumer perceptions, the AR tools were found to be more “novel” and “memorable”. More specifically, for both lasagne and yogurt, the visual AR tool produced a more memorable experience than the original packaging, and both AR conditions were considered novel experiences. However, the provision of nutritional information had a greater impact on product perception than the specific experimental condition used to present it. These results provide pilot evidence supporting the development of an AR tool for displaying and potentially improving the understanding of nutrition labels. Full article

35 pages, 1744 KB  
Review
Personalizing Cochlear Implant Care in Single-Sided Deafness: A Distinct Paradigm from Bilateral Hearing Loss
by Emmeline Y. Lin, Stephanie M. Younan, Karen C. Barrett and Nicole T. Jiam
J. Pers. Med. 2025, 15(9), 439; https://doi.org/10.3390/jpm15090439 - 15 Sep 2025
Viewed by 1249
Abstract
Background: Cochlear implants (CIs) serve diverse populations with hearing loss, but patients with single-sided deafness (SSD) often show lower post-implantation usage and satisfaction than bilateral CI users. This disparity may stem from their normal contralateral ear providing sufficient auditory input for many daily situations, reducing the perceived need for consistent CI use. Consequently, uniform screening and evaluations, typically designed for bilateral hearing loss, often fail to address SSD’s unique needs. Methods: This narrative review synthesizes the current literature to explore patient and device factors shaping CI integration, outcomes, and experience in SSD. It highlights implications for developing personalized care strategies distinct from those used in bilateral hearing loss. Results: SSD patients face unique challenges: reliance on compensatory behaviors and significant auditory processing difficulties such as acoustic–electric mismatch and place–pitch discrepancy. Anatomical factors and duration of deafness also impact outcomes. Traditional measures are often insufficient due to ceiling effects. Music perception offers a sensitive metric and rehabilitation tool, while big data and machine learning show promise for predicting outcomes and tailoring interventions. Conclusions: Optimizing CI care for SSD necessitates a personalized approach across candidacy, counseling, and rehabilitation. Tailored strategies, including individualized frequency mapping, adaptive auditory training, advanced outcome metrics such as music perception, and leveraging big data for precise, data-driven predictions, are crucial for improving consistent CI usage and overall patient satisfaction. Full article
(This article belongs to the Special Issue Otolaryngology: Big Data Application in Personalized Medicine)

16 pages, 3326 KB  
Article
Vibrotactile Perception of Consonant and Dissonant Musical Intervals
by Alvaro Garcia Lopez, Jose Luis Lopez-Cuadrado, Israel Gonzalez-Carrasco, Maria Natividad Carrero de las Peñas, Maria Jose Lucia Mulas and Belen Ruiz Mezcua
Appl. Sci. 2025, 15(18), 9873; https://doi.org/10.3390/app15189873 - 9 Sep 2025
Viewed by 518
Abstract
In recent years, with the development of haptic technologies, the vibrotactile perception of musical parameters has attracted much interest, and vibrotactile discrimination of musical notes has already been studied. In this study, we approach the problem of the vibrotactile perception of consonant and dissonant tone relationships, essential components of Western tonal music. Thirty participants were asked to distinguish between consonant and dissonant intervals presented in two conditions: the Auditory Condition and the Vibrotactile Condition (through tactile stimulation). The stimuli were Western tonal piano intervals classified, from a music-theory standpoint, as perfectly consonant or dissonant. The results show that consonant and dissonant musical intervals can be perceived at the tactile level and that, among participants with no musical training, there is no significant difference in the number of intervals correctly recognised between the Vibrotactile and Auditory Conditions. Consonance/dissonance perception differs somewhat between the two conditions: vibrotactile perception is more accurate with larger intervals of more than ten semitones, whereas auditory performance depends on the number of semitones (becoming more sensitive from eleven semitones onwards) and on the type of interval, possibly reflecting the influence of auditory musical training. These results open up the possibility of transmitting other tonal musical characteristics, and with them the melodic and harmonic basis of Western music, through tactile stimulation, offering a wide range of options for investigation. Full article
(This article belongs to the Section Acoustics and Vibrations)
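
In equal temperament, an interval of n semitones corresponds to a frequency ratio of 2^(n/12), so two-tone consonant and dissonant stimuli of the kind used here can be synthesized directly and routed either to headphones or to a tactile actuator. A minimal sketch; the base frequency, duration, and sample rate are assumptions, not the study’s stimulus set.

```python
import numpy as np

SR = 44_100  # sample rate (Hz)

def interval(base_hz, semitones, dur=1.0):
    """Two simultaneous sine tones `semitones` apart in equal temperament."""
    t = np.arange(int(SR * dur)) / SR
    f2 = base_hz * 2 ** (semitones / 12)      # equal-temperament ratio
    return np.sin(2 * np.pi * base_hz * t) + np.sin(2 * np.pi * f2 * t)

fifth   = interval(261.63, 7)   # perfect fifth: a canonical consonance
tritone = interval(261.63, 6)   # tritone: a canonical dissonance
# The same waveforms can drive a loudspeaker (Auditory Condition) or a
# voice-coil actuator (Vibrotactile Condition).
```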

23 pages, 2762 KB  
Article
Relationships Between Self-Report Hearing Scales, Listening Effort, and Speech Perception in Cocktail Party Noise in Hearing-Aided Patients
by Annie Moulin, Pierre-Emmanuel Aguera and Mathieu Ferschneider
Audiol. Res. 2025, 15(5), 113; https://doi.org/10.3390/audiolres15050113 - 8 Sep 2025
Viewed by 379
Abstract
Background/Objectives: Potential correlations between self-report questionnaire scores and speech-perception-in-noise abilities vary widely among studies and have been little explored in patients with conventional hearing aids (HAs). This study aimed to analyse the interrelations between (1) self-report auditory scales (the 15-item short form of the Speech, Spatial and Qualities of Hearing Scale (15iSSQ) and the Extended Listening Effort Assessment Scale (EEAS)); (2) speech perception in cocktail party noise, measured with and without HAs; and (3) self-assessed listening effort during the speech-in-noise task (TLE) in hearing-aid wearers. Material and Methods: Thirty-two patients with a mean age of 77.5 years (SD = 12) and a mean HA experience of 5.6 years completed the 15iSSQ and EEAS. Their speech-in-babble-noise perception thresholds (SPIN) were assessed with (HA_SPIN) and without their HAs (UA_SPIN), using a four-alternative forced-choice test in free field at several fixed signal-to-noise ratios (SNRs). Participants self-assessed their listening effort at each SNR, allowing us to define a task-related listening-effort threshold with (HA_TLE) and without HAs (UA_TLE), i.e., the SNR at which they rated their listening effort as 5 out of 10. Results: 15iSSQ decreased as both HA_SPIN (r = −0.47, p < 0.01) and HA_TLE (r = −0.36, p < 0.05) increased. The relationship between 15iSSQSpeech and UA_SPIN (and UA_TLE) was strongly moderated by HA experience and HA daily wear (HADW), which explained up to 31% of the variance. 15iSSQQuality depended on HA_SPIN and HA_TLE (r = −0.50, p < 0.01), and the relationship between 15iSSQQuality and UA_TLE was moderated by HADW. EEAS scores depended on both HA experience and UA_SPIN, with a strong moderating influence of HADW. Conclusions: Relationships between auditory questionnaires and SPIN are strongly moderated by both HA experience and HADW, even in experienced HA users, showing the need to account for these variables when analysing relationships between questionnaires and hearing-in-noise tests in experienced HA wearers. Full article
(This article belongs to the Section Hearing)
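
On the task-related listening-effort threshold (TLE): it is defined above as the SNR at which self-rated effort equals 5 out of 10, and with ratings collected at a handful of fixed SNRs such a threshold can be obtained by linear interpolation. A sketch under that assumption (the ratings are invented, and the paper’s exact estimation procedure may differ):

```python
import numpy as np

def effort_threshold(snrs_db, effort_ratings, criterion=5.0):
    """SNR at which self-rated listening effort crosses `criterion` (0-10),
    by linear interpolation between the tested SNRs."""
    snrs = np.asarray(snrs_db, dtype=float)
    effort = np.asarray(effort_ratings, dtype=float)
    order = np.argsort(effort)               # np.interp needs ascending x
    return float(np.interp(criterion, effort[order], snrs[order]))

# Hypothetical ratings from one listener at five fixed SNRs.
print(effort_threshold([-6, -3, 0, 3, 6], [9, 7, 5.5, 4, 2]))  # 1.0 dB here
```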

17 pages, 723 KB  
Article
The Transfer of In-Game Behaviors and Emotions to Real-World Experiences in Game World
by Zhuoyue Diao, Pu Meng, Xin Meng and Liqun Zhang
Behav. Sci. 2025, 15(9), 1203; https://doi.org/10.3390/bs15091203 - 4 Sep 2025
Viewed by 727
Abstract
This study investigates the complex interaction between in-game behaviors, post-game emotional expressions, and Game Transfer Phenomena (GTP) among MOBA players. A multidimensional framework is adopted to examine how gaming experiences shape real-world cognition, perception, and behavior through the integration of objective behavioral metrics and affective computing-based emotion recognition. The results indicate that in-game Deaths are negatively associated with altered sensory perceptions—specifically Altered Visual and Auditory Perceptions (AVP and AAP)—suggesting that performance failures may disrupt immersive engagement. In contrast, Assists, as indicators of team-based collaboration, are positively associated with Automatic Mental Processes (AMP), highlighting the cognitive impact of cooperative gameplay. Although no significant mediating effects were observed, emotional dimensions, such as Social Discomfort and Cognitive Focus, offered additional insights into the transfer between in-game and post-game experiences. These findings bridge the gap between virtual and real-world experiences, offering theoretical advancements in GTP research and practical implications for game design, emotional regulation, and psychological interventions. Full article
(This article belongs to the Section Social Psychology)

23 pages, 1660 KB  
Article
Soundtalking: Extending Soundscape Practice Through Long-Term Participant-Led Sound Activities in the Dee Estuary
by Neil Spencer Bruce
Sustainability 2025, 17(17), 7904; https://doi.org/10.3390/su17177904 - 2 Sep 2025
Viewed by 643
Abstract
This study explores the practice of “soundtalking”, a novel method of participant-led sound practice, across the Dee Estuary in the UK. Over the course of twelve months, the Our Dee Estuary Project facilitated monthly meetings where participants engaged in sound workshops, in-depth discussions, and sound-making activities, with the aim of fostering a deeper connection with both their local and sonic environments. This longitudinal practice-based research created an environment of sonic learning and listening development, documenting how participants’ interactions and narratives both shape and are shaped by the estuarial environment, its soundscape, and their sense of place. Participant-led conversations formed the basis of the methodology, providing rich qualitative data on how individuals perceive, interpret, and interact with their surroundings and on the impact that the soundscape has on the individual. The regular, unstructured discussions revealed the intrinsic value of soundscapes in participants’ lives, emphasising themes of memory, reflection, place attachment, environmental awareness, and well-being. The collaborative nature of the project allowed for the co-creation of a film and a radio soundscape, both significant outputs that encapsulate the auditory and emotional essence of the estuary. The study’s initial findings indicate that soundtalking not only enhances participants’ auditory perception but also fosters a sense of community and belonging. The regularity of monthly meetings facilitated the development of a shared acoustic vocabulary and experience among participants, which in turn enriched their collective and individual experiences of the estuary. Soundtalking is proposed as an additional tool in the study of soundscapes, complementing and extending more commonly implemented methods such as soundwalking and soundsitting: it captures the dynamic, lived experience of soundscapes and their associated environments, in contrast to methods that create only fleeting, short-term engagements. In conclusion, the Our Dee Estuary Project demonstrates the transformative potential of soundtalking in deepening our understanding of human–environment interactions and shows that health and well-being benefits arise from the practice. Beyond this, the project has produced a film and a radio sound piece, which not only document but also celebrate the intricate and evolving relationship between the participants and the estuarine soundscape, offering valuable insights for future soundscape research and community engagement initiatives. Full article
(This article belongs to the Special Issue Urban Noise Control, Public Health and Sustainable Cities)

18 pages, 3373 KB  
Article
Framework for Classification of Fattening Pig Vocalizations in a Conventional Farm with High Relevance for Practical Application
by Thies J. Nicolaisen, Katharina E. Bollmann, Isabel Hennig-Pauka and Sarah C. L. Fischer
Animals 2025, 15(17), 2572; https://doi.org/10.3390/ani15172572 - 1 Sep 2025
Viewed by 759
Abstract
The vocal repertoire of the domestic pig (Sus scrofa domesticus) was examined in this study under conventional housing conditions. To this end, behavior-associated vocalizations of fattening pigs were recorded and assigned to behavioral categories. A mathematical analysis of the recorded vocalizations was then conducted using three frequency-based parameters (the 25%, 50%, and 75% quantiles of the frequency spectrum) and three time-based parameters (the variance of the time signal, the mean level of the individual amplitude modulation, and the cumulative amplitude modulation). Most vocalizations (59.7%) were assessed as positive/neutral, of which grunting was by far the most frequent; negatively assessed vocalizations accounted for 37.8% of all vocalizations. Data analysis based on the six parameters made it possible to distinguish vocalizations related to negatively valenced behavior from those related to positively/neutrally valenced behavior. The study illustrates the relationship between auditory sensory perception and the underlying mathematical signals: pig vocalizations assessed by observation as positive or negative are distinguishable using mathematical parameters, but ambiguities arise when objective mathematical features widely overlap. In this way, the study encourages the use of more complex algorithms in the future to solve this challenging, multidimensional problem, forming the basis for future automatic detection of negative pig vocalizations. Full article
(This article belongs to the Special Issue Animal Health and Welfare Assessment of Pigs)
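
The frequency-based parameters, the 25%, 50%, and 75% quantiles of the frequency spectrum, can be read as the frequencies below which the corresponding fractions of spectral energy lie. A minimal sketch of that computation; the synthetic “call” and sample rate are illustrative, not the study’s recordings.

```python
import numpy as np

def spectral_quantiles(x, sr, qs=(0.25, 0.5, 0.75)):
    """Frequencies below which the given fractions of spectral energy lie."""
    spec = np.abs(np.fft.rfft(x)) ** 2         # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1 / sr)
    cdf = np.cumsum(spec) / spec.sum()         # cumulative spectral energy
    return [float(freqs[np.searchsorted(cdf, q)]) for q in qs]

# Hypothetical 0.5 s call: a 600 Hz grunt-like tone plus broadband noise.
sr = 16_000
t = np.arange(int(0.5 * sr)) / sr
call = np.sin(2 * np.pi * 600 * t) \
     + 0.2 * np.random.default_rng(1).normal(size=t.size)
print(spectral_quantiles(call, sr))  # all three land near the 600 Hz component
```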

20 pages, 627 KB  
Article
Silent Bells and Howling Muslims: Auditory History and Christian–Muslim Relations in Felix Fabri’s Evagatorium
by Julia Samp
Religions 2025, 16(9), 1134; https://doi.org/10.3390/rel16091134 - 30 Aug 2025
Viewed by 789
Abstract
Contacts and conflicts between Christians and Muslims in the Mediterranean region in the context of late medieval pilgrimage to Jerusalem, and their depiction in pilgrimage reports, have already been extensively analysed from the perspective of medieval studies. Although the relation with “the other” is grounded in sensory perception, little attention has been paid to the senses, especially to the auditory dimension of how Christian–Muslim relations were perceived during the pilgrimage to Jerusalem and depicted in the pilgrimage reports. Using the example of probably the best-known pilgrimage report of the late Middle Ages, the Evagatorium by Felix Fabri (1437/8–1502), the essay shows—firstly—that the monk from Ulm added a veritable “soundtrack” to his work. Secondly, the essay emphasises the methodological challenges of such an approach, because every form of pre-modern sound has faded; apart from sound artefacts, pre-modern sound survives only in media that are themselves silent. Nevertheless, the essay points out the potential of an auditory reading of Christian–Muslim relations in the Mediterranean region, which allows conclusions to be drawn about the establishment, development, and disruption of relations between Christians and Muslims. Full article
51 pages, 15030 KB  
Review
A Review on Sound Source Localization in Robotics: Focusing on Deep Learning Methods
by Reza Jalayer, Masoud Jalayer and Amirali Baniasadi
Appl. Sci. 2025, 15(17), 9354; https://doi.org/10.3390/app15179354 - 26 Aug 2025
Cited by 1 | Viewed by 1606
Abstract
Sound source localization (SSL) adds a spatial dimension to auditory perception, allowing a system to pinpoint the origin of speech, machinery noise, warning tones, or other acoustic events, capabilities that facilitate robot navigation, human–machine dialogue, and condition monitoring. While existing surveys provide valuable historical context, they typically address general audio applications and do not fully account for robotic constraints or the latest advancements in deep learning. This review addresses these gaps by offering a robotics-focused synthesis, emphasizing recent progress in deep learning methodologies. We start by reviewing classical methods such as time difference of arrival (TDOA), beamforming, steered-response power (SRP), and subspace analysis. Subsequently, we delve into modern machine learning (ML) and deep learning (DL) approaches, discussing traditional ML and neural networks (NNs), convolutional neural networks (CNNs), convolutional recurrent neural networks (CRNNs), and emerging attention-based architectures. Data and training strategies, the two cornerstones of DL-based SSL, are then explored. Studies are further categorized by robot type and application domain to help researchers identify work relevant to their specific contexts. Finally, we highlight the current challenges of SSL research regarding environmental robustness, sound-source multiplicity, and implementation constraints specific to robotics, as well as data and learning strategies in DL-based SSL, and we sketch promising directions toward an actionable roadmap for robust, adaptable, efficient, and explainable DL-based SSL for next-generation robots. Full article
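
Of the classical methods listed, TDOA estimation is the most compact to illustrate: GCC-PHAT whitens the cross-spectrum so that the correlation peak depends on phase (hence delay) alone. The sketch below is a generic implementation, not code from the review; the test signal and sample rate are assumptions.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, sr):
    """Time difference of arrival between two microphone signals
    via the generalized cross-correlation phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                    # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    # Re-centre so lags run from -n//2 to +n//2, then pick the peak.
    shift = np.argmax(np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1])))
    return (shift - n // 2) / sr              # seconds; sign = which mic leads

# Hypothetical test: the same noise burst, delayed by 20 samples at mic 2.
sr = 16_000
s = np.random.default_rng(0).normal(size=2048)
delayed = np.concatenate((np.zeros(20), s))[:2048]
print(gcc_phat_tdoa(delayed, s, sr) * sr)     # ~20 samples
```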

20 pages, 2568 KB  
Article
Towards Spatial Awareness: Real-Time Sensory Augmentation with Smart Glasses for Visually Impaired Individuals
by Nadia Aloui
Electronics 2025, 14(17), 3365; https://doi.org/10.3390/electronics14173365 - 25 Aug 2025
Viewed by 964
Abstract
This research presents an innovative Internet of Things (IoT) and artificial intelligence (AI) platform designed to provide holistic assistance and foster autonomy for visually impaired individuals within the university environment. Its main novelty is real-time sensory augmentation and spatial awareness, integrating ultrasonic, LiDAR, and RFID sensors for robust 360° obstacle detection, environmental perception, and precise indoor localization. A novel, optimized Dijkstra algorithm calculates optimal routes, and speech and intent recognition enable intuitive voice control. The wearable smart glasses are complemented by a platform providing essential educational functionalities, including lesson reminders, timetables, and emergency assistance. Based on gamified principles of exploration and challenge, the platform includes immersive technology settings, intelligent image recognition, auditory conversion, haptic feedback, and rapid contextual awareness, delivering a sophisticated and effective navigational experience. A thorough technical evaluation reveals notable improvements in navigation performance, object detection accuracy, and the technical capabilities supporting social interaction features, enabling a more autonomous and fulfilling university experience. Full article
(This article belongs to the Section Computer Science & Engineering)
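
The abstract does not detail the paper’s Dijkstra optimizations; for reference, the textbook algorithm it builds on looks like this on a toy indoor waypoint graph (node names and edge weights are invented):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by textbook Dijkstra on a weighted adjacency dict."""
    pq, best, prev = [(0.0, start)], {start: 0.0}, {}
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > best.get(node, float("inf")):
            continue                           # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < best.get(nbr, float("inf")):
                best[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    path, node = [], goal
    while node != start:                       # walk predecessors back
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], best[goal]

# Hypothetical corridor graph: nodes are RFID waypoints, weights in metres.
campus = {"entrance": {"hall": 12},
          "hall": {"entrance": 12, "lab": 30, "stairs": 8},
          "stairs": {"hall": 8, "lab": 15},
          "lab": {"hall": 30, "stairs": 15}}
print(dijkstra(campus, "entrance", "lab"))
# (['entrance', 'hall', 'stairs', 'lab'], 35.0)
```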
