Search Results (8)

Search Parameters:
Keywords = sustained vowel phonations

10 pages, 3866 KB  
Article
An Acoustic Simulation Method of the Japanese Vowels /i/ and /u/ by Using the Boundary Element Method
by Mami Shiraishi, Katsuaki Mishima, Masahiro Takekawa, Masaaki Mori and Hirotsugu Umeda
Acoustics 2023, 5(2), 553-562; https://doi.org/10.3390/acoustics5020033 - 6 Jun 2023
Cited by 1 | Viewed by 2539
Abstract
This study aimed to establish and verify the validity of an acoustic simulation method during sustained phonation of the Japanese vowels /i/ and /u/. The study participants were six healthy adults. First, vocal tract models were constructed based on computed tomography (CT) data, covering the range from the frontal sinus to the glottis, during sustained phonation of /i/ and /u/. To imitate the trachea, cylindrical shapes, virtually extended by 12 cm, were then added to the vocal tract models between the tracheal bifurcation and the lower part of the glottis. Next, the boundary element method was used for discretization, and the Kirchhoff–Helmholtz integral equation was used to represent the wave equation for sound propagation. As a result, the relative discrimination thresholds of the vowel formant frequencies for /i/ and /u/ against the actual voice were 1.1–10.2% and 0.4–9.3% for the first formant and 3.9–7.5% and 5.0–12.5% for the second formant, respectively. In the vocal tract model with nasal coupling, a pole–zero pair was observed at around 500 Hz, and for both /i/ and /u/, a pole–zero pair was observed at around 1000 Hz regardless of the presence or absence of nasal coupling. Therefore, the boundary element method, which produces solutions by analyzing boundary problems rather than three-dimensional volumes, was found to be effective for simulating the Japanese vowels /i/ and /u/ with high validity for vocal tract models encompassing a wide range, from the frontal sinuses to the trachea, constructed from CT data obtained during sustained phonation.
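The formant comparison in this abstract reduces to a relative deviation between a simulated and a measured formant frequency. A minimal sketch of that calculation (the formant values below are hypothetical, not taken from the paper):

```python
def relative_formant_deviation(f_simulated, f_measured):
    """Relative deviation (%) of a simulated formant frequency from the
    formant measured in the actual voice."""
    return abs(f_simulated - f_measured) / f_measured * 100.0

# Hypothetical first-formant values for /i/, in Hz
dev_f1 = relative_formant_deviation(f_simulated=290.0, f_measured=270.0)
print(f"F1 deviation: {dev_f1:.1f}%")  # 7.4%
```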

27 pages, 95915 KB  
Article
Mapping Phonation Types by Clustering of Multiple Metrics
by Huanchen Cai and Sten Ternström
Appl. Sci. 2022, 12(23), 12092; https://doi.org/10.3390/app122312092 - 25 Nov 2022
Cited by 6 | Viewed by 2310
Abstract
For voice analysis, much work has been undertaken with a multitude of acoustic and electroglottographic metrics. However, few of these have proven to be robustly correlated with physical and physiological phenomena. In particular, all metrics are affected by the fundamental frequency and sound level, making voice assessment sensitive to the recording protocol. It was investigated whether combinations of metrics, acquired over voice maps rather than with individual sustained vowels, can offer a more functional and comprehensive interpretation. For this descriptive, retrospective study, 13 men, 13 women, and 22 children were instructed to phonate on /a/ over their full voice range. Six acoustic and EGG signal features were obtained for every phonatory cycle. An unsupervised voice classification model created feature clusters, which were then displayed on voice maps. It was found that the feature clusters may be readily interpreted in terms of phonation types. For example, the typical intense voice has a high peak EGG derivative, a relatively high contact quotient, low EGG cycle-rate entropy, and a high cepstral peak prominence in the voice signal, all represented by one cluster centroid that is mapped to a given color. In a transition region between the non-contacting and contacting of the vocal folds, the combination of metrics shows a low contact quotient and relatively high entropy, which can be mapped to a different color. Based on this data set, male phonation types could be clustered into up to six categories and female and child types into four. Combining acoustic and EGG metrics resolved more categories than either kind on their own. The inter- and intra-participant distributional features are discussed.
(This article belongs to the Special Issue Current Trends and Future Directions in Voice Acoustics Measurement)
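The abstract does not name the clustering algorithm, so as an illustrative stand-in, a plain k-means over per-cycle feature vectors sketches the idea of grouping phonatory cycles into clusters whose centroids can each be mapped to a color. All feature values below are invented:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: cluster per-cycle feature vectors (e.g. contact
    quotient, EGG cycle-rate entropy) into candidate phonation types."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        centroids = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids

# Two toy "phonation types": (contact quotient, entropy) per cycle --
# low CQ / high entropy vs. high CQ / low entropy
cycles = [(0.30, 0.90), (0.32, 0.85), (0.70, 0.10), (0.72, 0.15)]
cents = sorted(kmeans(cycles, k=2))
print(cents)  # two centroids, one per phonation type
```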

18 pages, 3845 KB  
Article
Horizontal and Vertical Voice Directivity Characteristics of Sung Vowels in Classical Singing
by Manuel Brandner, Matthias Frank and Alois Sontacchi
Acoustics 2022, 4(4), 849-866; https://doi.org/10.3390/acoustics4040051 - 1 Oct 2022
Cited by 6 | Viewed by 3964
Abstract
Singing voice directivity for five sustained German vowels /a:/, /e:/, /i:/, /o:/, /u:/ over a wide pitch range was investigated using a multichannel microphone array with high spatial resolution along the horizontal and vertical axes. A newly created dataset allows voice directivity in classical singing to be examined with high resolution in angle and frequency. Three voice production modes (phonation modes), modal, breathy, and pressed, that could affect mouth opening and voice directivity were investigated. We present detailed results for singing voice directivity and introduce metrics to discuss the differences between the complex voice directivity patterns of the whole dataset in a more compact form. Differences were found between vowels, pitch, and gender (voice types with corresponding vocal range). Differences between the vowels /a:, e:, i:/ and /o:, u:/ and between pitches can be captured by simplified metrics up to about d2/D5/587 Hz, but we found that voice directivity generally depends strongly on pitch. Minor differences were found between voice production modes, and these were more pronounced for female singers. Voice directivity differs at low pitch between vowels, with front vowels being the most directional. Which of the front vowels is most directional depends on the evaluated pitch. This seems to be related to the complex radiation pattern of the human voice, which shows large inter-subject variability strongly influenced by the shape of the torso, head, and mouth. All recorded classically sung vowels at high pitches exhibit similarly high directionality.
(This article belongs to the Special Issue Acoustics, Speech and Signal Processing)
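As a rough illustration of what a simplified directivity metric can look like (not one of the metrics introduced in the paper), one can compare the on-axis sound level with the mean level across all microphone angles. All pressure values below are hypothetical:

```python
import math

def directivity_index_db(pressures, on_axis=0):
    """Crude directivity measure: level of the on-axis microphone relative
    to the mean level over all array angles, in dB. `pressures` are RMS
    sound pressures sampled at equal angular spacing (a simplification of
    the dense horizontal/vertical array used in the study)."""
    mean_sq = sum(p * p for p in pressures) / len(pressures)
    return 10.0 * math.log10(pressures[on_axis] ** 2 / mean_sq)

# Hypothetical radiation pattern, strongest toward the front (index 0)
pattern = [1.0, 0.8, 0.5, 0.3, 0.5, 0.8]
print(f"{directivity_index_db(pattern):.2f} dB")
```

An omnidirectional pattern (all pressures equal) gives 0 dB; more frontal concentration gives a larger positive value.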

23 pages, 4369 KB  
Article
Segmentation of Glottal Images from High-Speed Videoendoscopy Optimized by Synchronous Acoustic Recordings
by Bartosz Kopczynski, Ewa Niebudek-Bogusz, Wioletta Pietruszewska and Pawel Strumillo
Sensors 2022, 22(5), 1751; https://doi.org/10.3390/s22051751 - 23 Feb 2022
Cited by 4 | Viewed by 2883
Abstract
Laryngeal high-speed videoendoscopy (LHSV) is an imaging technique offering novel visualization quality of the vibratory activity of the vocal folds. However, most image analysis methods require interaction with medical personnel and access to ground truth annotations to achieve accurate detection of vocal fold edges. In our fully automatic method, we combine video and acoustic data that are synchronously recorded during laryngeal endoscopy. We show that the image segmentation algorithm for the glottal area can be optimized by matching the Fourier spectra of the pre-processed video with the spectra of the acoustic recording during phonation of the sustained vowel /i:/. We verify our method on a set of LHSV recordings taken from subjects with normophonic voice and patients with voice disorders due to glottal insufficiency. We show that the computed geometric indices of the glottal area make it possible to discriminate between normal and pathologic voices. The median Open Quotient and Minimal Relative Glottal Area values were 0.69 and 0.06, respectively, for healthy subjects, and 1 and 0.35, respectively, for dysphonic subjects. We also validated these results with independent phoniatrician experts.
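The optimization idea, matching the Fourier spectrum of the video-derived glottal-area series against that of the audio, can be sketched with a naive DFT and a normalized spectral correlation. This is an illustrative simplification, not the authors' implementation:

```python
import cmath
import math

def magnitude_spectrum(signal):
    """Naive DFT magnitude spectrum (fine for short illustrative signals)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))) for k in range(n)]

def spectral_similarity(a, b):
    """Normalized correlation of two magnitude spectra; a candidate
    segmentation whose glottal-area waveform matches the audio scores high."""
    sa, sb = magnitude_spectrum(a), magnitude_spectrum(b)
    dot = sum(x * y for x, y in zip(sa, sb))
    na = sum(x * x for x in sa) ** 0.5
    nb = sum(y * y for y in sb) ** 0.5
    return dot / (na * nb)

# A glottal-area series oscillating at the audio's rate scores higher
# than one oscillating at a different rate.
audio   = [math.sin(2 * math.pi * 4 * t / 32) for t in range(32)]
matched = [math.sin(2 * math.pi * 4 * t / 32) for t in range(32)]
off     = [math.sin(2 * math.pi * 9 * t / 32) for t in range(32)]
print(spectral_similarity(audio, matched) > spectral_similarity(audio, off))  # True
```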

32 pages, 1711 KB  
Article
Things to Consider When Automatically Detecting Parkinson’s Disease Using the Phonation of Sustained Vowels: Analysis of Methodological Issues
by Alex S. Ozbolt, Laureano Moro-Velazquez, Ioan Lina, Ankur A. Butala and Najim Dehak
Appl. Sci. 2022, 12(3), 991; https://doi.org/10.3390/app12030991 - 19 Jan 2022
Cited by 24 | Viewed by 3984
Abstract
Diagnosing Parkinson’s Disease (PD) necessitates monitoring symptom progression. Unfortunately, diagnostic confirmation often occurs years after disease onset. A more sensitive and objective approach is paramount to the expedient diagnosis and treatment of persons with PD (PwPDs). Recent studies have shown that accurate models can be trained to detect signs of PD from audio recordings of confirmed PwPDs. However, disparities exist between studies and may be caused, in part, by differences in the employed corpora or methodologies. Our hypothesis is that unaccounted-for covariates in methodology, experimental design, and data preparation have resulted in overly optimistic results in studies of automatic PD detection employing sustained vowels. These issues include record-wise rather than subject-wise fold creation; an imbalance of age between the PwPD and control classes; using too small a corpus relative to the size of the feature vectors; performing cross-validation without including development data; and the absence of cross-corpora testing to confirm results. In this paper, we evaluate the influence of these methodological issues on the automatic detection of PD employing sustained vowels. We perform several experiments isolating each issue to measure its influence, employing three different corpora. Moreover, we analyze whether the perceived dysphonia of the speakers could be causing differences in results between the corpora. Results suggest that each independent methodological issue analyzed has an effect on classification accuracy. Consequently, we recommend a list of methodological steps to be considered in future experiments to avoid overoptimistic or misleading results.
(This article belongs to the Special Issue Applications of Speech and Language Technologies in Healthcare)
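The record-wise versus subject-wise distinction is easy to illustrate: subject-wise fold creation assigns all recordings of a speaker to the same fold, so no speaker contributes to both training and test data. A minimal sketch (the round-robin assignment is an illustrative choice):

```python
from collections import defaultdict

def subject_wise_folds(records, n_folds):
    """Assign whole subjects (not individual recordings) to folds.
    `records` is a list of (subject_id, recording_id) pairs."""
    by_subject = defaultdict(list)
    for subject, rec in records:
        by_subject[subject].append(rec)
    folds = [[] for _ in range(n_folds)]
    for i, subject in enumerate(sorted(by_subject)):
        folds[i % n_folds].extend((subject, r) for r in by_subject[subject])
    return folds

# Three recordings each from two speakers, split into two folds:
# each fold contains one speaker only, unlike record-wise splitting,
# which would let the same voice leak into both train and test sets.
recs = [("s1", 1), ("s1", 2), ("s1", 3), ("s2", 1), ("s2", 2), ("s2", 3)]
folds = subject_wise_folds(recs, n_folds=2)
print([sorted({s for s, _ in fold}) for fold in folds])  # [['s1'], ['s2']]
```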

13 pages, 1526 KB  
Article
Assessing Parkinson’s Disease at Scale Using Telephone-Recorded Speech: Insights from the Parkinson’s Voice Initiative
by Siddharth Arora and Athanasios Tsanas
Diagnostics 2021, 11(10), 1892; https://doi.org/10.3390/diagnostics11101892 - 14 Oct 2021
Cited by 30 | Viewed by 5377
Abstract
Numerous studies have reported high accuracy when using voice tasks for the remote detection and monitoring of Parkinson’s Disease (PD). Most of these studies, however, report findings on a small number of voice recordings, often collected under acoustically controlled conditions, and therefore cannot scale to large populations without specialized equipment. In this study, we aimed to evaluate the potential of using voice as a population-based PD screening tool in resource-constrained settings. Using the standard telephone network, we processed 11,942 sustained vowel /a/ phonations from a US-English cohort comprising 1078 PD and 5453 control participants. We characterized each phonation using 304 dysphonia measures to quantify a range of vocal impairments. Given that this is a highly unbalanced problem, we used the following strategy: we selected a balanced subset (n = 3000 samples) for training and testing using 10-fold cross-validation (CV) and used the remaining samples (an unbalanced held-out dataset, n = 8942) for further model validation. Using robust feature selection methods, we selected 27 dysphonia measures to present to a radial-basis-function support vector machine and demonstrated differentiation of PD participants from controls with 67.43% sensitivity and 67.25% specificity. These findings could help pave the way toward the development of an inexpensive, remote, and reliable diagnostic support tool for PD using voice as a digital biomarker.
(This article belongs to the Special Issue Machine Learning Approaches for Neurodegenerative Diseases Diagnosis)
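The balanced-subset strategy described above can be sketched as follows (toy data; the actual study drew n = 3000 balanced samples from 11,942 phonations):

```python
import random

def balanced_split(labels, n_train, seed=0):
    """Pick a class-balanced subset of n_train sample indices
    (n_train // 2 per class) for training/CV; all remaining indices form
    an unbalanced held-out validation set."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    train = sorted(rng.sample(pos, n_train // 2) + rng.sample(neg, n_train // 2))
    held_out = [i for i in range(len(labels)) if i not in set(train)]
    return train, held_out

labels = [1] * 10 + [0] * 90              # toy unbalanced cohort: 10% positive
train, held = balanced_split(labels, n_train=10)
print(sum(labels[i] for i in train), len(held))  # 5 90
```

The classifier is then trained and cross-validated on the balanced subset, and its generalization is checked on the held-out, naturally unbalanced remainder.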

12 pages, 911 KB  
Article
A Case of Specificity: How Does the Acoustic Voice Quality Index Perform in Normophonic Subjects?
by Christina Batthyany, Youri Maryn, Ilse Trauwaen, Els Caelenberghe, Joost van Dinther, Andrzej Zarowski and Floris Wuyts
Appl. Sci. 2019, 9(12), 2527; https://doi.org/10.3390/app9122527 - 21 Jun 2019
Cited by 30 | Viewed by 5387
Abstract
The acoustic voice quality index (AVQI) is a multiparametric tool based on six acoustic measurements to quantify overall voice quality in an objective manner, with the smoothed version of the cepstral peak prominence (CPPS) as its main contributor. In the last decade, many studies have demonstrated its robust diagnostic accuracy and high sensitivity to voice changes across voice therapy in different languages. The aim of the present study was to provide information regarding the performance of AVQI and CPPS in normophonic, non-treatment-seeking subjects, since these data are still scarce. Concatenated voice samples, consisting of sustained vowel phonation and continuous speech, from 123 subjects (72 females, 51 males; between 20 and 60 years old) without vocally relevant complaints were evaluated by three raters and run in AVQI v.02.06. According to this auditory-perceptual evaluation, two cohorts were set up (normophonia versus slight perceived dysphonia). First, gender effects were investigated. Secondly, between-cohort differences in AVQI and CPPS were investigated. Thirdly, with the number of judges giving G = 1 used to partition three sub-levels of slight hoarseness as an independent factor, differences in AVQI and CPPS across these sub-levels were investigated. For AVQI, no significant gender effect was found, whereas for CPPS, significant trends were observed. For both AVQI and CPPS, no significant differences were found between normophonic and slightly dysphonic subjects. For AVQI, however, this difference did approach significance. These findings emphasize the need for a normative study with a greater sample size and subsequently greater statistical power to detect possible significant effects and differences.
(This article belongs to the Special Issue Computational Methods and Engineering Solutions to Voice)

18 pages, 397 KB  
Article
Changes in Phonation and Their Relations with Progress of Parkinson’s Disease
by Zoltan Galaz, Jiri Mekyska, Vojtech Zvoncak, Jan Mucha, Tomas Kiska, Zdenek Smekal, Ilona Eliasova, Martina Mrackova, Milena Kostalova, Irena Rektorova, Marcos Faundez-Zanuy, Jesus B. Alonso-Hernandez and Pedro Gomez-Vilda
Appl. Sci. 2018, 8(12), 2339; https://doi.org/10.3390/app8122339 - 22 Nov 2018
Cited by 27 | Viewed by 5304
Abstract
Hypokinetic dysarthria, which is associated with Parkinson’s disease (PD), affects several speech dimensions, including phonation. Although the scientific community has dealt with quantitative analysis of phonation in PD patients, comprehensive research revealing probable relations between phonatory features and the progress of PD is missing. Therefore, the aim of this study was to explore these relations and model them mathematically in order to estimate the progress of PD during a two-year follow-up. We enrolled 51 PD patients who were assessed using three commonly used clinical scales. In addition, we quantified eight possible phonatory disorders in five vowels. To identify the relationship between baseline phonatory features and changes in clinical scores, we performed a partial correlation analysis. Finally, we trained XGBoost models to predict the changes in clinical scores during the two-year follow-up. Over the two years, the patients’ voices became more aperiodic, with increased microperturbations of frequency and amplitude. The XGBoost models were able to predict changes in clinical scores with an error in the range of 11–26%. Although we identified some significant correlations between changes in phonatory features and clinical scores, they are less interpretable. This study suggests that it is possible to predict the progress of PD based on acoustic analysis of phonation. Moreover, it recommends utilizing the sustained vowel /i/ instead of /a/.
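"Microperturbations of frequency" are conventionally quantified as jitter. A minimal local-jitter sketch, under the assumption that glottal cycle lengths have already been extracted from the sustained vowel (the values below are invented):

```python
def local_jitter(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal cycle lengths, relative to the mean cycle length -- a standard
    measure of cycle-to-cycle frequency microperturbation."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# Cycle lengths in ms from a hypothetical sustained vowel
periods = [7.5, 7.6, 7.4, 7.55, 7.45]
print(f"jitter = {local_jitter(periods):.2f}%")
```

Amplitude microperturbation (shimmer) follows the same pattern, applied to per-cycle peak amplitudes instead of cycle lengths.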
