Search Results (26)

Search Parameters:
Keywords = vocal biomarker

20 pages, 3290 KB  
Article
MiRNA Profiling in Premalignant Lesions and Early Glottic Cancer
by Anna Rzepakowska, Agnieszka Zajkowska, Marta Mękarska, Julia Śladowska, Aleksandra Borowy and Maciej Małecki
Cancers 2025, 17(17), 2883; https://doi.org/10.3390/cancers17172883 - 2 Sep 2025
Viewed by 288
Abstract
Background: miRNA profiling across different stages of laryngeal carcinogenesis explores dysregulated molecules relevant to engaged gene pathways and identifies markers for differential diagnosis and prognosis in early mucosal lesions of the larynx. Methods: Tissue samples were prospectively collected from 28 patients with hypertrophic vocal fold lesions: no dysplasia (ND), low-grade dysplasia (LGD), high-grade dysplasia (HGD), and invasive cancer (IC), as well as from 3 patients with vocal fold polyps. miRNA profiling of the samples was performed using microfluidic cards—TaqMan® Human MicroRNA Array A. A comparative analysis of ΔCt (dCt) miRNA expression levels was conducted between groups. Results: hsa-miR-216a-5p and hsa-miR-488-3p were selectively expressed in control tissues, while hsa-miR-105-5p and hsa-miR-516a-5p were exclusively detected in HGD and IC samples. Significant differences in miRNA expression were identified across 4, 16, 17, and 38 miRNA types between control and ND, LGD, HGD, and IC groups, respectively. hsa-miR-185-5p and hsa-miR-21-5p showed significantly altered expression between ND and LGD, HGD, and IC (p = 0.026, 0.001, 0.002; and p = 0.021, 0.002, 0.001, respectively). Twenty-five miRNAs were differentially expressed between LGD and both HGD and IC, while eleven miRNAs distinguished HGD from IC. Notably, hsa-miR-503-5p expression decreased progressively with increasing histological severity. Conclusions: Distinct miRNA expression profiles are associated with progressive stages of laryngeal mucosal lesions. Specific miRNAs may serve as valuable biomarkers for early detection, risk stratification, and prognosis in vocal fold carcinogenesis. Full article
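
As a side note for readers reproducing this kind of group-wise ΔCt comparison, a non-parametric two-group test is a common choice; the sketch below is a generic stand-in (the study does not specify its test here), and the dCt arrays are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: compare dCt expression of one miRNA between two lesion groups.
# The arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

dct_lgd = np.array([6.2, 5.8, 7.1, 6.5, 6.9])   # hypothetical dCt values, low-grade dysplasia
dct_hgd = np.array([4.9, 5.1, 4.3, 5.6, 4.7])   # hypothetical dCt values, high-grade dysplasia

# Lower dCt means higher expression; a two-sided Mann-Whitney U test asks
# whether the two groups differ in expression level.
stat, p_value = mannwhitneyu(dct_lgd, dct_hgd, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```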

16 pages, 317 KB  
Perspective
Listening to the Mind: Integrating Vocal Biomarkers into Digital Health
by Irene Rodrigo and Jon Andoni Duñabeitia
Brain Sci. 2025, 15(7), 762; https://doi.org/10.3390/brainsci15070762 - 18 Jul 2025
Viewed by 1125
Abstract
The human voice is an invaluable tool for communication, carrying information about a speaker’s emotional state and cognitive health. Recent research highlights the potential of acoustic biomarkers to detect early signs of mental health and neurodegenerative conditions. Despite their promise, vocal biomarkers remain underutilized in clinical settings, with limited standardized protocols for assessment. This Perspective article argues for the integration of acoustic biomarkers into digital health solutions to improve the detection and monitoring of cognitive impairment and emotional disturbances. Advances in speech analysis and machine learning have demonstrated the feasibility of using voice features such as pitch, jitter, shimmer, and speech rate to assess these conditions. Moreover, we propose that singing, particularly simple melodic structures, could be an effective and accessible means of gathering vocal biomarkers, offering additional insights into cognitive and emotional states. Given its potential to engage multiple neural networks, singing could function as an assessment tool and an intervention strategy for individuals with cognitive decline. We highlight the necessity of further research to establish robust, reproducible methodologies for analyzing vocal biomarkers and standardizing voice-based diagnostic approaches. By integrating vocal analysis into routine health assessments, clinicians and researchers could significantly advance early detection and personalized interventions for cognitive and emotional disorders. Full article
(This article belongs to the Topic Language: From Hearing to Speech and Writing)
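
For readers who want to compute the voice features mentioned above (pitch, jitter, shimmer), a minimal sketch using the praat-parselmouth package is shown below; the file name "sample.wav" and the pitch-range settings are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch of extracting pitch, jitter, and shimmer from one recording.
# Assumes the praat-parselmouth package and a local file "sample.wav" (hypothetical).
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("sample.wav")

# Mean fundamental frequency (pitch) in Hz over the voiced parts of the signal.
pitch = snd.to_pitch()
mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")

# Jitter and shimmer are computed from the glottal pulse sequence (PointProcess).
point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
shimmer_local = call([snd, point_process], "Get shimmer (local)",
                     0, 0, 0.0001, 0.02, 1.3, 1.6)

print(f"mean F0 = {mean_f0:.1f} Hz, jitter = {jitter_local:.4f}, shimmer = {shimmer_local:.4f}")
```
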
13 pages, 940 KB  
Review
Management of Dysarthria in Amyotrophic Lateral Sclerosis
by Elena Pasqualucci, Diletta Angeletti, Pamela Rosso, Elena Fico, Federica Zoccali, Paola Tirassa, Armando De Virgilio, Marco de Vincentiis and Cinzia Severini
Cells 2025, 14(14), 1048; https://doi.org/10.3390/cells14141048 - 9 Jul 2025
Viewed by 1130
Abstract
Amyotrophic lateral sclerosis (ALS) stands as the leading neurodegenerative disorder affecting the motor system. One of the hallmarks of ALS, especially its bulbar form, is dysarthria, which significantly impairs the quality of life of ALS patients. This review provides a comprehensive overview of the current knowledge on the clinical manifestations, diagnostic differentiation, underlying mechanisms, diagnostic tools, and therapeutic strategies for the treatment of dysarthria in ALS. We update on the most promising digital speech biomarkers of ALS that are critical for early and differential diagnosis. Advances in artificial intelligence and digital speech processing have transformed the analysis of speech patterns, and offer the opportunity to start therapy early to improve vocal function, as speech rate appears to decline significantly before the diagnosis of ALS is confirmed. In addition, we discuss the impact of interventions that can improve vocal function and quality of life for patients, such as compensatory speech techniques, surgical options, improving lung function and respiratory muscle strength, and percutaneous dilated tracheostomy, possibly with adjunctive therapies to treat respiratory insufficiency, and finally assistive devices for alternative communication. Full article
(This article belongs to the Special Issue Pathology and Treatments of Amyotrophic Lateral Sclerosis (ALS))

22 pages, 1595 KB  
Review
Machine Learning Applications for Diagnosing Parkinson’s Disease via Speech, Language, and Voice Changes: A Systematic Review
by Mohammad Amran Hossain, Enea Traini and Francesco Amenta
Inventions 2025, 10(4), 48; https://doi.org/10.3390/inventions10040048 - 27 Jun 2025
Viewed by 1655
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder leading to movement impairment, cognitive decline, and psychiatric symptoms. Key manifestations of PD include bradykinesia (the slowness of movement), changes in voice or speech, and gait disturbances. The quantification of neurological disorders through voice analysis has emerged as a rapidly expanding research domain, offering the potential for non-invasive and large-scale monitoring. This review explores existing research on the application of machine learning (ML) in speech, voice, and language processing for the diagnosis of PD. It comprehensively analyzes current methodologies, highlights key findings and their associated limitations, and proposes strategies to address existing challenges. A systematic review was conducted following PRISMA guidelines. We searched four databases: PubMed, Web of Science, Scopus, and IEEE Xplore. The primary focus was on the diagnosis, detection, or identification of PD through voice, speech, and language characteristics. We included 34 studies that used ML techniques to detect or classify PD based on vocal features. The most used approaches involved free speech and reading-speech tasks. In addition to widely used feature extraction toolkits, several studies implemented custom-built feature sets. Although nearly all studies reported high classification performance, significant limitations were identified, including challenges in comparability and incomplete integration with clinical applications. Emerging trends in this field include the collection of real-world, everyday speech data to facilitate longitudinal tracking and capture participants’ natural behaviors. Another promising direction involves the incorporation of additional modalities alongside voice analysis, which may enhance both analytical performance and clinical applicability. Further research is required to determine optimal methodologies for leveraging speech and voice changes as early biomarkers of PD, thereby enhancing early detection and informing clinical intervention strategies. Full article
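
Many of the pipelines this review covers follow the same broad pattern: extract frame-level acoustic features (commonly MFCCs), pool them per recording, and feed them to a classifier. A minimal illustrative sketch, assuming librosa and scikit-learn; file names and labels are hypothetical placeholders, not the reviewed datasets.

```python
# Illustrative sketch of the feature-extraction + classification pattern described above.
# File paths and labels are hypothetical placeholders.
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def recording_features(path):
    """Mean and standard deviation of 13 MFCCs over the whole recording."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

paths = ["pd_01.wav", "pd_02.wav", "hc_01.wav", "hc_02.wav"]  # hypothetical recordings
labels = np.array([1, 1, 0, 0])                               # 1 = PD, 0 = healthy control

X = np.vstack([recording_features(p) for p in paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, labels, cv=2))
```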

24 pages, 3235 KB  
Article
Alzheimer’s Disease Detection from Speech Using Shapley Additive Explanations for Feature Selection and Enhanced Interpretability
by Irati Oiza-Zapata and Ascensión Gallardo-Antolín
Electronics 2025, 14(11), 2248; https://doi.org/10.3390/electronics14112248 - 31 May 2025
Viewed by 799
Abstract
Smart cities provide an ideal framework for the integration of advanced healthcare applications, such as early Alzheimer’s Disease (AD) detection that is essential to facilitate timely interventions and slow its progression. In this context, speech analysis, combined with Artificial Intelligence (AI) techniques, has emerged as a promising approach for the automatic detection of AD, as vocal biomarkers can provide valuable indicators of cognitive decline. The proposed approach focuses on two key goals: minimizing computational overhead while maintaining high accuracy, and improving model interpretability for clinical usability. To enhance efficiency, the framework incorporates a data quality method that removes unreliable speech segments based on duration thresholds and applies Shapley Additive Explanations (SHAP) to select the most influential acoustic features. SHAP is also used to improve interpretability by providing global and local explanations of model decisions. The final model, that is based on Extreme Gradient Boosting (XGBoost), achieves an F1-Score of 0.7692 on the ADReSS dataset, showing good performance and a satisfactory level of clinical utility. This work highlights the potential of explainable AI to bridge machine learning techniques with clinically meaningful insights in the domain of AD detection from speech. Full article
(This article belongs to the Section Computer Science & Engineering)
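
The SHAP-based feature selection described above can be illustrated with a short sketch, assuming the shap and xgboost packages and a generic synthetic feature matrix; the ADReSS features, thresholds, and model settings used in the paper are not reproduced here.

```python
# Illustrative sketch: rank features by mean |SHAP value| and keep the top-k,
# as a generic stand-in for the SHAP-based selection described in the abstract.
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=30, random_state=0)  # placeholder data

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)           # shape: (n_samples, n_features)
importance = np.abs(shap_values).mean(axis=0)    # global importance per feature

top_k = 10
selected = np.argsort(importance)[::-1][:top_k]
print("Selected feature indices:", selected)

# Retrain on the reduced feature set.
reduced_model = xgboost.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
reduced_model.fit(X[:, selected], y)
```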

30 pages, 1745 KB  
Review
The Human Voice as a Digital Health Solution Leveraging Artificial Intelligence
by Pratyusha Muddaloor, Bhavana Baraskar, Hriday Shah, Keerthy Gopalakrishnan, Divyanshi Sood, Prem C. Pasupuleti, Akshay Singh, Dipankar Mitra, Sumedh S. Hoskote, Vivek N. Iyer, Scott A. Helgeson and Shivaram P. Arunachalam
Sensors 2025, 25(11), 3424; https://doi.org/10.3390/s25113424 - 29 May 2025
Cited by 1 | Viewed by 2873
Abstract
The human voice is an important medium of communication and expression of feelings or thoughts. Disruption in the regulatory systems of the human voice can be analyzed and used as a diagnostic tool, labeling voice as a potential “biomarker”. Conversational artificial intelligence is at the core of voice-powered technologies, enabling intelligent interactions between machines. Due to its richness and availability, voice can be leveraged for predictive analytics and enhanced healthcare insights. Utilizing this idea, we reviewed artificial intelligence (AI) models that have executed vocal analysis and their outcomes. Recordings undergo extraction of useful vocal features to be analyzed by neural networks and machine learning models. Studies reveal machine learning models to be superior to spectral analysis in dynamically combining the huge amount of data of vocal features. Clinical applications of a vocal biomarker exist in neurological diseases such as Parkinson’s, Alzheimer’s, psychological disorders, DM, CHF, CAD, aspiration, GERD, and pulmonary diseases, including COVID-19. The primary ethical challenge when incorporating voice as a diagnostic tool is that of privacy and security. To eliminate this, encryption methods exist to convert patient-identifiable vocal data into a more secure, private nature. Advancements in AI have expanded the capabilities and future potential of voice as a digital health solution. Full article

20 pages, 2817 KB  
Article
Escalate Prognosis of Parkinson’s Disease Employing Wavelet Features and Artificial Intelligence from Vowel Phonation
by Rumana Islam and Mohammed Tarique
BioMedInformatics 2025, 5(2), 23; https://doi.org/10.3390/biomedinformatics5020023 - 30 Apr 2025
Viewed by 1582
Abstract
Background: This work presents an artificial intelligence-based algorithm for detecting Parkinson’s disease (PD) from voice signals. The detection of PD at pre-symptomatic stages is imperative to slow disease progression. Speech signal processing-based PD detection can play a crucial role here, as it has been reported in the literature that PD affects the voice quality of patients at an early stage. Hence, speech samples can be used as biomarkers of PD, provided that suitable voice features and artificial intelligence algorithms are employed. Methods: Advanced signal-processing techniques are used to extract audio features from the sustained vowel ‘/a/’ sound. The extracted audio features include baseline features, intensities, formant frequencies, bandwidths, vocal fold parameters, and Mel-frequency cepstral coefficients (MFCCs) to form a feature vector. Then, this feature vector is further enriched by including wavelet-based features to form the second feature vector. For classification purposes, two popular machine learning models, namely, support vector machine (SVM) and k-nearest neighbors (kNNs), are trained to distinguish patients with PD. Results: The results demonstrate that the inclusion of wavelet-based voice features enhances the performance of both the SVM and kNN models for PD detection. However, kNN provides better accuracy, detection speed, training time, and misclassification cost than SVM. Conclusions: This work concludes that wavelet-based voice features are important for detecting neurodegenerative diseases like PD. These wavelet features can enhance the classification performance of machine learning models. This work also concludes that kNN is recommendable over SVM for the investigated voice features, despite the inclusion and exclusion of the wavelet features. Full article
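
The wavelet-based enrichment of the feature vector can be sketched as follows, assuming PyWavelets and scikit-learn; the wavelet family, decomposition level, and toy data are illustrative choices, not the paper's exact configuration.

```python
# Illustrative sketch: append wavelet-subband energies to a baseline feature vector
# and classify with k-nearest neighbours. Signals and labels are placeholders.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    """Log-energy of each subband from a multilevel discrete wavelet transform."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

rng = np.random.default_rng(0)
signals = [rng.standard_normal(8000) for _ in range(20)]   # placeholder vowel segments
baseline = rng.standard_normal((20, 12))                   # placeholder baseline features
labels = np.repeat([0, 1], 10)                             # 0 = control, 1 = PD (hypothetical)

X = np.hstack([baseline, np.vstack([wavelet_features(s) for s in signals])])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("Training accuracy (toy data):", knn.score(X, labels))
```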

19 pages, 675 KB  
Review
Vocal Feature Changes for Monitoring Parkinson’s Disease Progression—A Systematic Review
by Helen Wright and Vered Aharonson
Brain Sci. 2025, 15(3), 320; https://doi.org/10.3390/brainsci15030320 - 19 Mar 2025
Cited by 2 | Viewed by 1757
Abstract
Background: Parkinson’s disease has a significant impact on vocal characteristics and speech patterns, making them potential biomarkers for monitoring disease progression. To effectively utilise these biomarkers, it is essential to understand how they evolve over time as this degenerative disease progresses. Objectives: This review aims to identify the most used vocal features in Parkinson’s disease monitoring and to track the temporal changes observed in each feature. Methods: An online database search was conducted to identify studies on voice and speech changes associated with Parkinson’s disease progression. The analysis examined the features and their temporal changes to identify potential feature classes and trends. Results: Eighteen features were identified and categorised into three main aspects of speech: articulation, phonation and prosody. While twelve of these features exhibited measurable variations in Parkinsonian voices compared to those of healthy individuals, insights into long-term changes were limited. Conclusions: Vocal features can effectively discriminate Parkinsonian voices and may be used to monitor changes through disease progression. These changes remain underexplored and necessitate more evidence from long-term studies. The additional evidence could provide clinical insights into the disease and enhance the effectiveness of automated voice-based monitoring. Full article
(This article belongs to the Special Issue New Approaches in the Exploration of Parkinson’s Disease)

18 pages, 1846 KB  
Review
Artificial Intelligence in Biomedical Engineering and Its Influence on Healthcare Structure: Current and Future Prospects
by Divya Tripathi, Kasturee Hajra, Aditya Mulukutla, Romi Shreshtha and Dipak Maity
Bioengineering 2025, 12(2), 163; https://doi.org/10.3390/bioengineering12020163 - 8 Feb 2025
Cited by 2 | Viewed by 7214
Abstract
Artificial intelligence (AI) is a growing area of computer science that combines technologies with data science to develop intelligent, highly computation-able systems. Its ability to automatically analyze and query huge sets of data has rendered it essential to many fields such as healthcare. This article introduces you to artificial intelligence, how it works, and what its central role in biomedical engineering is. It brings to light new developments in medical science, why it is being applied in biomedicine, key problems in computer vision and AI, medical applications, diagnostics, and live health monitoring. This paper starts with an introduction to artificial intelligence and its major subfields before moving into how AI is revolutionizing healthcare technology. There is a lot of emphasis on how it will transform biomedical engineering through the use of AI-based devices like biosensors. Not only can these machines detect abnormalities in a patient’s physiology, but they also allow for chronic health tracking. Further, this review also provides an overview of the trends of AI-enabled healthcare technologies and concludes that the adoption of artificial intelligence in healthcare will be very high. The most promising are in diagnostics, with highly accurate, non-invasive diagnostics such as advanced imaging and vocal biomarker analyzers leading medicine into the future. Full article
(This article belongs to the Section Biosignal Processing)

18 pages, 6601 KB  
Article
Dolphin Health Classifications from Whistle Features
by Brittany Jones, Jessica Sportelli, Jeremy Karnowski, Abby McClain, David Cardoso and Maximilian Du
J. Mar. Sci. Eng. 2024, 12(12), 2158; https://doi.org/10.3390/jmse12122158 - 26 Nov 2024
Cited by 4 | Viewed by 2414
Abstract
Bottlenose dolphins often conceal behavioral signs of illness until they reach an advanced stage. Motivated by the efficacy of vocal biomarkers in human health diagnostics, we utilized supervised machine learning methods to assess various model architectures’ effectiveness in classifying dolphin health status from the acoustic features of their whistles. A gradient boosting classifier achieved a 72.3% accuracy in distinguishing between normal and abnormal health states—a significant improvement over chance (permutation test; 1000 iterations, p < 0.001). The model was trained on 30,693 whistles from 15 dolphins and the test set (15%) totaled 3612 ‘normal’ and 1775 ‘abnormal’ whistles. The classifier identified the health status of the dolphin from the whistles features with 72.3% accuracy, 73.2% recall, 56.1% precision, and a 63.5% F1 score. These findings suggest the encoding of internal health information within dolphin whistle features, with indications that the severity of illness correlates with classification accuracy, notably in its success for identifying ‘critical’ cases (94.2%). The successful development of this diagnostic tool holds promise for furnishing a passive, non-invasive, and cost-effective means for early disease detection in bottlenose dolphins. Full article
(This article belongs to the Special Issue Recent Advances in Marine Bioacoustics)
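
The evaluation reported above (accuracy, recall, precision, and F1 on a held-out test set) corresponds to a standard scikit-learn workflow; the sketch below uses synthetic placeholder data rather than the study's whistle features.

```python
# Illustrative sketch: gradient boosting on whistle-feature vectors, with the
# metrics reported in the abstract. Feature matrix and labels are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.67, 0.33],
                           random_state=0)  # placeholder for normal/abnormal whistles

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15,
                                                    stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("F1       :", f1_score(y_test, pred))
```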

38 pages, 1732 KB  
Review
Voice as a Biomarker of Pediatric Health: A Scoping Review
by Hannah Paige Rogers, Anne Hseu, Jung Kim, Elizabeth Silberholz, Stacy Jo, Anna Dorste and Kathy Jenkins
Children 2024, 11(6), 684; https://doi.org/10.3390/children11060684 - 4 Jun 2024
Cited by 6 | Viewed by 3794
Abstract
The human voice has the potential to serve as a valuable biomarker for the early detection, diagnosis, and monitoring of pediatric conditions. This scoping review synthesizes the current knowledge on the application of artificial intelligence (AI) in analyzing pediatric voice as a biomarker for health. The included studies featured voice recordings from pediatric populations aged 0–17 years, utilized feature extraction methods, and analyzed pathological biomarkers using AI models. Data from 62 studies were extracted, encompassing study and participant characteristics, recording sources, feature extraction methods, and AI models. Data from 39 models across 35 studies were evaluated for accuracy, sensitivity, and specificity. The review showed a global representation of pediatric voice studies, with a focus on developmental, respiratory, speech, and language conditions. The most frequently studied conditions were autism spectrum disorder, intellectual disabilities, asphyxia, and asthma. Mel-Frequency Cepstral Coefficients were the most utilized feature extraction method, while Support Vector Machines were the predominant AI model. The analysis of pediatric voice using AI demonstrates promise as a non-invasive, cost-effective biomarker for a broad spectrum of pediatric conditions. Further research is necessary to standardize the feature extraction methods and AI models utilized for the evaluation of pediatric voice as a biomarker for health. Standardization has significant potential to enhance the accuracy and applicability of these tools in clinical settings across a variety of conditions and voice recording types. Further development of this field has enormous potential for the creation of innovative diagnostic tools and interventions for pediatric populations globally. Full article
(This article belongs to the Section Pediatric Otolaryngology)

13 pages, 2022 KB  
Article
The Chaperone System in Tumors of the Vocal Cords: Quantity and Distribution Changes of Hsp10, Hsp27, Hsp60, and Hsp90 during Carcinogenesis
by Alessandro Pitruzzella, Alberto Fucarino, Michele Domenico Modica, Vincenzo Luca Lentini, Claudio Vella, Stefano Burgio, Federica Calabrò, Giorgia Intili and Francesca Rappa
Appl. Sci. 2024, 14(2), 722; https://doi.org/10.3390/app14020722 - 15 Jan 2024
Viewed by 1696
Abstract
Laryngeal squamous cell carcinoma (LSCC) constitutes a noteworthy subset of head and neck cancers, contributing to about 4.5% of all malignancies. Its clinical behavior and characteristics exhibit variations contingent upon the specific anatomical site affected, with the glottis, supraglottis, and subglottis emerging as the most prevalent locations. Notably, squamous cell carcinoma represents a predominant histological type, accounting for 85% to 95% of all laryngeal cancers. The gender disparity is evident, with a higher incidence among males, exhibiting a ratio of 3.9:1. Moreover, disparities among racial groups are observed, as African American patients tend to manifest the condition at a younger age, coupled with lower overall survival rates compared to their Caucasian, Hispanic, and Asian counterparts. The primary etiological factors implicated in the onset of laryngeal cancer are tobacco and alcohol consumption, with a direct correlation to the intensity and duration of usage. Importantly, the risk diminishes gradually following cessation, necessitating a substantial period of at least 15 years for a return to baseline rates. Given the diverse nature of laryngeal SCC, treatment modalities are tailored based on the specific site and stage of the disease. Therapeutic interventions, such as radiotherapy, transoral laser microsurgery, open horizontal partial laryngectomy, or total laryngectomy, are employed with the overarching goal of preserving organ function. This study delves into the intricate realm of laryngeal SCC, specifically exploring the involvement of heat shock proteins (HSPs) in disease progression. This research meticulously examines the expression levels of Hsp10, Hsp27, Hsp60, and Hsp90 in dysplastic and benign tissue samples extracted from the right vocal cord, utilizing immunohistochemistry analysis. The focal point of the investigation revolves around unraveling the intricate role of these molecular chaperones in tissue differentiation mechanisms and cellular homeostasis, particularly within the inflammatory milieu characteristic of the tumor phenotype. The findings from this study serve as a robust histopathological foundation, paving the way for more in-depth analyses of the underlying mechanisms governing the contribution of the four chaperones to the development of squamous cell carcinoma in the larynx. Additionally, the data gleaned from this research hint at the potential of these four chaperones as valuable biomarkers, not only for diagnostic purposes but also for prognostication and ongoing patient monitoring. As our understanding of the molecular intricacies deepens, the prospect of targeted therapeutic interventions and personalized treatment strategies for laryngeal SCC becomes increasingly promising. Full article
(This article belongs to the Section Chemical and Molecular Sciences)

15 pages, 3274 KB  
Article
Unmasking Nasality to Assess Hypernasality
by Ignacio Moreno-Torres, Andrés Lozano, Rosa Bermúdez, Josué Pino, María Dolores García Méndez and Enrique Nava
Appl. Sci. 2023, 13(23), 12606; https://doi.org/10.3390/app132312606 - 23 Nov 2023
Cited by 1 | Viewed by 2483
Abstract
Automatic evaluation of hypernasality has been traditionally computed using monophonic signals (i.e., combining nose and mouth signals). Here, this study aimed to examine if nose signals serve to increase the accuracy of hypernasality evaluation. Using a conventional microphone and a Nasometer, we recorded monophonic, mouth, and nose signals. Three main analyses were performed: (1) comparing the spectral distance between oral/nasalized vowels in monophonic, nose, and mouth signals; (2) assessing the accuracy of Deep Neural Network (DNN) models in classifying oral/nasal sounds and vowel/consonant sounds trained with nose, mouth, and monophonic signals; (3) analyzing the correlation between DNN-derived nasality scores and expert-rated hypernasality scores. The distance between oral and nasalized vowels was the highest in the nose signals. Moreover, DNN models trained on nose signals outperformed in nasal/oral classification (accuracy: 0.90), but were slightly less precise in vowel/consonant differentiation (accuracy: 0.86) compared to models trained on other signals. A strong Pearson’s correlation (0.83) was observed between nasality scores from DNNs trained with nose signals and human expert ratings, whereas those trained on mouth signals showed a weaker correlation (0.36). We conclude that mouth signals partially mask the nasality information carried by nose signals. Significance: the accuracy of hypernasality assessment tools may improve by analyzing nose signals. Full article
(This article belongs to the Special Issue Advances in Speech and Language Processing)
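
The reported agreement between model-derived nasality scores and expert ratings is a plain Pearson correlation; a minimal sketch with hypothetical score vectors (not the study's data) follows.

```python
# Illustrative sketch: correlate model-derived nasality scores with expert ratings.
# Both vectors below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

dnn_nasality_scores = np.array([0.12, 0.35, 0.48, 0.61, 0.72, 0.80, 0.91])  # hypothetical
expert_ratings = np.array([1, 1, 2, 3, 3, 4, 4])                            # hypothetical scale

r, p = pearsonr(dnn_nasality_scores, expert_ratings)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```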

16 pages, 1023 KB  
Review
Oxidative Stress in Obstructive Sleep Apnea Syndrome: Putative Pathways to Hearing System Impairment
by Pierluigi Mastino, Davide Rosati, Giulia de Soccio, Martina Romeo, Daniele Pentangelo, Stefano Venarubea, Marco Fiore, Piero Giuseppe Meliante, Carla Petrella, Christian Barbato and Antonio Minni
Antioxidants 2023, 12(7), 1430; https://doi.org/10.3390/antiox12071430 - 15 Jul 2023
Cited by 15 | Viewed by 3552
Abstract
Introduction: OSAS is a disease that affects 2% of men and 4% of women of middle age. It is a major public health problem because untreated OSAS could lead to cardiovascular, metabolic, and cerebrovascular complications. The more accepted theory relates to oxidative stress due to intermittent hypoxia, which leads, after an intense inflammatory response through multiple pathways, to endothelial damage. The objective of this study is to demonstrate a correlation between OSAS and hearing loss, the effect of the CPAP on hearing function, and if oxidative stress is also involved in the damaging of the hearing system. Methods: A review of the literature has been executed. Eight articles have been found, where seven were about the correlation between OSAS and the hearing system, and only one was about the CPAP effects. It is noted that two of the eight articles explored the theory of oxidative stress due to intermittent hypoxia. Results: All studies showed a significant correlation between OSAS and hearing function (p < 0.05). Conclusions: Untreated OSAS affects the hearing system at multiple levels. Oxidative stress due to intermittent hypoxia is the main pathogenetic mechanism of damage. CPAP has no effects (positive or negative) on hearing function. More studies are needed, with the evaluation of extended high frequencies, the execution of vocal audiometry in noisy environments, and the evaluation of potential biomarkers due to oxidative stress. Full article

17 pages, 3065 KB  
Review
Acoustic Monitoring of Professionally Managed Marine Mammals for Health and Welfare Insights
by Kelley A. Winship and Brittany L. Jones
Animals 2023, 13(13), 2124; https://doi.org/10.3390/ani13132124 - 27 Jun 2023
Cited by 9 | Viewed by 4174
Abstract
Research evaluating marine mammal welfare and opportunities for advancements in the care of species housed in a professional facility have rapidly increased in the past decade. While topics, such as comfortable housing, adequate social opportunities, stimulating enrichment, and a high standard of medical care, have continued to receive attention from managers and scientists, there is a lack of established acoustic consideration for monitoring the welfare of these animals. Marine mammals rely on sound production and reception for navigation and communication. Regulations governing anthropogenic sound production in our oceans have been put in place by many countries around the world, largely based on the results of research with managed and trained animals, due to the potential negative impacts that unrestricted noise can have on marine mammals. However, there has not been an established best practice for the acoustic welfare monitoring of marine mammals in professional care. By monitoring animal hearing and vocal behavior, a more holistic view of animal welfare can be achieved through the early detection of anthropogenic sound sources, the acoustic behavior of the animals, and even the features of the calls. In this review, the practice of monitoring cetacean acoustic welfare through behavioral hearing tests and auditory evoked potentials (AEPs), passive acoustic monitoring, such as the Welfare Acoustic Monitoring System (WAMS), as well as ideas for using advanced technologies for utilizing vocal biomarkers of health are introduced and reviewed as opportunities for integration into marine mammal welfare plans. Full article
(This article belongs to the Special Issue Zoo and Aquarium Welfare, Ethics, Behavior)
