Search Results (29)

Search Parameters:
Keywords = monosyllabic words

20 pages, 1589 KB  
Article
Articulatory Control by Gestural Coupling and Syllable Pulses
by Christopher Geissler
Languages 2025, 10(9), 219; https://doi.org/10.3390/languages10090219 - 29 Aug 2025
Viewed by 582
Abstract
Explaining the relative timing of consonant and vowel articulations (C-V timing) is an important function of speech production models. This article explores how C-V timing might be studied from the perspective of the C/D Model, particularly the prediction that articulations are coordinated with respect to an abstract syllable pulse. Gestural landmarks were extracted from kinematic data from English CVC monosyllabic words in the Wisconsin X-Ray Microbeam Corpus. The syllable pulse was identified using velocity peaks, and temporal lags were calculated among landmarks and the syllable pulse. The results directly follow from the procedure used to identify pulses: onset consonants exhibited stable timing to the pulse, while vowel-to-pulse timing was comparably stable with respect to C-V timing. Timing relationships with jaw displacement and jaw-based syllable pulse metrics were also explored. These results highlight current challenges for the C/D Model, as well as opportunities for elaborating the model to account for C-V timing.
(This article belongs to the Special Issue Research on Articulation and Prosodic Structure)
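
The pulse-identification procedure described in the abstract can be illustrated in a few lines. Below is a minimal sketch, assuming a single articulator trajectory: velocity peaks are located with scipy, the largest is treated as the syllable pulse, and landmark-to-pulse lags are computed. The trajectory, sampling rate, and landmark times are invented stand-ins, not data from the Wisconsin X-Ray Microbeam Corpus.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical articulator trajectory (e.g., jaw height in mm), sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 1.0, 1.0 / fs)
position = 5.0 * np.sin(2 * np.pi * 2.0 * t)  # stand-in for real kinematic data

# Velocity via numerical differentiation; the syllable pulse is taken here as
# the largest velocity peak, one reading of "identified using velocity peaks".
velocity = np.gradient(position, 1.0 / fs)
peaks, props = find_peaks(np.abs(velocity), height=0)
pulse_time = t[peaks[np.argmax(props["peak_heights"])]]

# Hypothetical gestural landmark times (s): onset-C target, V target, coda-C target.
landmarks = {"C_onset": 0.12, "V": 0.31, "C_coda": 0.55}

# Temporal lags of each landmark relative to the syllable pulse.
lags = {name: time - pulse_time for name, time in landmarks.items()}
print(f"pulse at {pulse_time:.3f} s;", {k: round(v, 3) for k, v in lags.items()})
```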

15 pages, 3561 KB  
Data Descriptor
Acoustic Data on Vowel Nasalization Across Prosodic Conditions in L1 Korean and L2 English by Native Korean Speakers
by Jiyoung Jang, Sahyang Kim and Taehong Cho
Data 2025, 10(6), 82; https://doi.org/10.3390/data10060082 - 23 May 2025
Viewed by 863
Abstract
This article presents acoustic data on coarticulatory vowel nasalization from the productions of twelve L1 Korean speakers and fourteen Korean learners of L2 English. The dataset includes eight monosyllabic target words embedded in eight carrier sentences, each repeated four times per speaker. Half of the words contain a nasal coda (e.g., p*am in Korean, bomb in English) and the other half a nasal onset (e.g., mat in Korean, mob in English). These were produced under varied prosodic conditions, including three phrase positions and two focus conditions, enabling analysis of prosodic effects on vowel nasalization across languages along with individual speaker variation. The accompanying CSV files provide acoustic measurements such as nasal consonant duration, A1-P0, and normalized A1-P0 at multiple timepoints within the vowel. While the theoretical implications have been discussed in two published studies, the full dataset is published here. By making these data publicly available, we aim to promote broad reuse and encourage further research at the intersection of prosody, phonetics, and second language acquisition, ultimately advancing our understanding of how phonetic patterns emerge, transfer, and vary across languages and learners.
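
Because the dataset is distributed as CSV files, a condition-wise summary of the nasalization measure takes only a few lines of pandas. The file name and column names (language, phrase_position, focus, norm_A1_P0) are hypothetical; the published data descriptor defines the real schema.

```python
import pandas as pd

# Hypothetical file and column names; the published CSVs define the real schema.
df = pd.read_csv("korean_nasalization.csv")

# Mean normalized A1-P0 (a standard acoustic index of vowel nasalization)
# per language and prosodic condition.
summary = (
    df.groupby(["language", "phrase_position", "focus"])["norm_A1_P0"]
      .agg(["mean", "std", "count"])
      .reset_index()
)
print(summary)
```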

12 pages, 964 KB  
Article
A Machine Learning Model to Predict Postoperative Speech Recognition Outcomes in Cochlear Implant Recipients: Development, Validation, and Comparison with Expert Clinical Judgment
by Alexey Demyanchuk, Eugen Kludt, Thomas Lenarz and Andreas Büchner
J. Clin. Med. 2025, 14(11), 3625; https://doi.org/10.3390/jcm14113625 - 22 May 2025
Cited by 2 | Viewed by 912
Abstract
Background/Objectives: Cochlear implantation (CI) significantly enhances speech perception and quality of life in patients with severe-to-profound sensorineural hearing loss, yet outcomes vary substantially. Accurate preoperative prediction of CI outcomes remains challenging. This study aimed to develop and validate a machine learning model predicting postoperative speech recognition using a large, single-center dataset. Additionally, we compared model performance with expert clinical predictions to evaluate potential clinical utility. Methods: We retrospectively analyzed data from 2571 adult patients with postlingual hearing loss who received their cochlear implant between 2000 and 2022 at Hannover Medical School, Germany. A decision tree regression model was trained to predict monosyllabic (MS) word recognition score one to two years post-implantation using preoperative clinical variables (age, duration of deafness, preoperative MS score, pure tone average, onset type, and contralateral implantation status). Model evaluation was performed using a random data split (10%), a chronological future cohort (patients implanted after 2020), and a subset where experienced audiologists predicted outcomes for comparison. Results: The model achieved a mean absolute error (MAE) of 17.3% on the random test set and 17.8% on the chronological test set, demonstrating robust predictive performance over time. Compared to expert audiologist predictions, the model showed similar accuracy (MAE: 19.1% for the model vs. 18.9% for experts), suggesting comparable effectiveness. Conclusions: Our machine learning model reliably predicts postoperative speech outcomes and matches expert clinical predictions, highlighting its potential for supporting clinical decision-making. Future research should include external validation and prospective trials to further confirm clinical applicability.
(This article belongs to the Special Issue The Challenges and Prospects in Cochlear Implantation)
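
The modelling pipeline the abstract describes maps naturally onto scikit-learn. A minimal sketch, using the six preoperative predictors named above and a 10% random hold-out; the file, column names, and tree depth are assumptions, and the study's actual preprocessing and hyperparameters are not reproduced here.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical patient table; the six predictors follow the abstract.
df = pd.read_csv("ci_patients.csv")
predictors = ["age", "duration_of_deafness", "preop_ms_score",
              "pure_tone_average", "onset_type", "contralateral_ci"]
X = pd.get_dummies(df[predictors], columns=["onset_type"])  # encode categorical
y = df["postop_ms_score"]  # MS word score 1-2 years post-implantation (%)

# 10% random hold-out, mirroring the random test split described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.10, random_state=0)

model = DecisionTreeRegressor(max_depth=4, random_state=0)  # depth is a guess
model.fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.1f} percentage points")
```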

14 pages, 2743 KB  
Article
The Influence of Vowels on the Identification of Spoken Disyllabic Words in the Malayalam Language for Individuals with Hearing Loss
by Vijaya Kumar Narne, Dhanya Mohan, M. Badariya, Sruthi Das Avileri, Saransh Jain, Sunil Kumar Ravi, Yerraguntla Krishna, Reesha Oovattil Hussain and Abdulaziz Almudhi
Diagnostics 2024, 14(23), 2707; https://doi.org/10.3390/diagnostics14232707 - 30 Nov 2024
Cited by 1 | Viewed by 844
Abstract
Background/Objectives: The present study investigates the reasons for better recognition of disyllabic words in Malayalam among individuals with hearing loss. This research was conducted in three experiments. Experiment 1 measured the psychometric properties (slope, intercept, and maximum scores) of disyllabic wordlists. Experiment 2 examined PBmax scores across varying degrees of sensorineural hearing loss (SNHL) and compared these findings with studies in other Indian and global languages. Experiment 3 analyzed the recognition performance of different vowel combinations across varying degrees of hearing loss. Methods: Experiment 1: Psychometric functions for disyllabic word recognition were derived from 45 individuals with normal hearing. Word recognition was tested in quiet at nine hearing levels ranging from −10 to +40 dB HL. Experiment 2: 1000 participants with SNHL were categorized by hearing loss severity (mild, moderate, moderately severe, severe, and profound). Word recognition scores, including PBmax, were analyzed and compared across severity levels. Experiment 3: Percent error scores for 17 vowel combinations were assessed in 37 participants with SNHL. Ten disyllabic words represented each combination. Results: Disyllabic wordlists showed significantly higher word recognition scores than monosyllabic lists across all degrees of hearing loss. Individuals with mild-to-moderately severe SNHL achieved higher PBmax scores, with performance declining at severe- and profound-loss levels. The higher recognition of disyllabic words was attributed to contextual cues and low-frequency vowel-based information, particularly benefiting those with residual low-frequency hearing. Error analysis highlighted the influence of specific vowel combinations on word recognition performance. Conclusions: Disyllabic words are easier to recognize than monosyllabic words for individuals with SNHL due to their rich contextual and low-frequency energy cues. Disyllabic wordlists sustain higher recognition scores up to moderately severe hearing loss but show a marked decline with more severe losses. The phonemic balance of wordlists and vowel combinations significantly influences word recognition, emphasizing the importance of these factors in developing wordlists for clinical use.
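
Experiment 1's psychometric properties (slope, intercept, and maximum score) are conventionally obtained by fitting a sigmoid to percent-correct scores across presentation levels. A sketch of such a fit, assuming a logistic shape with a free asymptote; the data points are invented, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(level, max_score, midpoint, slope):
    """Logistic function: recognition score (%) vs presentation level (dB HL)."""
    return max_score / (1.0 + np.exp(-slope * (level - midpoint)))

# Invented example data: levels from -10 to +40 dB HL, mean word scores (%).
levels = np.array([-10, -5, 0, 5, 10, 15, 20, 30, 40], dtype=float)
scores = np.array([2, 10, 30, 55, 75, 88, 94, 98, 99], dtype=float)

params, _ = curve_fit(psychometric, levels, scores, p0=[100.0, 5.0, 0.2])
max_score, midpoint, slope = params
print(f"max = {max_score:.1f}%, midpoint = {midpoint:.1f} dB HL, "
      f"slope = {slope:.3f} per dB")
```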

25 pages, 1571 KB  
Article
Unfolding Prosody Guides the Development of Word Segmentation
by Sónia Frota, Cátia Severino and Marina Vigário
Languages 2024, 9(9), 305; https://doi.org/10.3390/languages9090305 - 19 Sep 2024
Cited by 2 | Viewed by 1890
Abstract
Prosody is known to scaffold the learning of language, and thus understanding prosodic development is vital for language acquisition. The present study explored the unfolding prosody model of prosodic development (proposed by Frota et al. in 2016) beyond early production data, to examine whether it predicted the development of early segmentation abilities. European Portuguese-learning infants aged between 5 and 17 months were tested in a series of word segmentation experiments. Developing prosodic structure was evidenced in word segmentation as proposed by the unfolding model: (i) a simple monosyllabic word shape crucially placed at a major prosodic edge was segmented first, before more complex word shapes under similar prosodic conditions; (ii) the segmentation of more complex words was easier at a major prosodic edge than in phrase-medial position; and (iii) the segmentation of complex words with an iambic pattern preceded the segmentation of words with a trochaic pattern. These findings demonstrated that word segmentation evolved with unfolding prosody, suggesting that the prosodic units developed in the unfolding process are used both as speech production planning units and to extract word-forms from continuous speech. Therefore, our study contributes to a better understanding of the mechanisms underlying word segmentation, and to a better understanding of early prosodic development, a cornerstone of language acquisition.
(This article belongs to the Special Issue Phonetic and Phonological Complexity in Romance Languages)

13 pages, 755 KB  
Article
Factors to Describe the Outcome Characteristics of a CI Recipient
by Matthias Hey, Kevyn Kogel, Jan Dambon, Alexander Mewes, Tim Jürgens and Thomas Hocke
J. Clin. Med. 2024, 13(15), 4436; https://doi.org/10.3390/jcm13154436 - 29 Jul 2024
Cited by 1 | Viewed by 1138
Abstract
Background: In cochlear implant (CI) treatment, there is a large variability in outcome. The aim of our study was to identify the independent audiometric measures that are most directly relevant for describing this variability in outcome characteristics of CI recipients. An extended audiometric test battery was used with selected adult patients in order to characterize the full range of CI outcomes. Methods: CI users were recruited for this study on the basis of their postoperative results and divided into three groups: low (1st quartile), moderate (median decile), and high hearing performance (4th quartile). Speech recognition was measured in quiet by using (i) monosyllabic words (40–80 dB SPL), (ii) speech reception threshold (SRT) for numbers, and (iii) the German matrix test in noise. In order to reconstruct demanding everyday listening situations in the clinic, the temporal characteristics of the background noise and the spatial arrangements of the signal sources were varied for tests in noise. In addition, a survey was conducted using the Speech, Spatial, and Qualities (SSQ) questionnaire and the Listening Effort (LE) questionnaire. Results: Fifteen subjects per group were examined (total N = 45), who did not differ significantly in terms of age, time after CI surgery, or CI use behavior. The groups differed mainly in the results of speech audiometry. For speech recognition, significant differences were found between the three groups for the monosyllabic tests in quiet and for the sentences in stationary (S0°N0°) and fluctuating (S0°NCI) noise. Word comprehension and sentence comprehension in quiet were both strongly correlated with the SRT in noise. This observation was also confirmed by a factor analysis. No significant differences were found between the three groups for the SSQ questionnaire and the LE questionnaire results. The results of the factor analysis indicate that speech recognition in noise provides information highly comparable to information from speech intelligibility in quiet. Conclusions: The factor analysis highlighted three components describing the postoperative outcome of CI patients. These were (i) the audiometrically measured supra-threshold speech recognition and (ii) near-threshold audibility, as well as (iii) the subjective assessment of the relationship to real life as determined by the questionnaires. These parameters appear well suited to setting up a framework for a test battery to assess CI outcomes.
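
The factor analysis step can be sketched with scikit-learn: standardized test scores go in, and the loadings indicate which measures group onto the three components described above. The measure names and the random score matrix are placeholders for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

# Placeholder score matrix: rows = subjects, columns = audiometric and
# questionnaire measures analogous to those above (names are illustrative).
rng = np.random.default_rng(0)
measures = ["mono_quiet_65dB", "srt_numbers", "matrix_s0n0", "matrix_s0nci",
            "ssq_total", "le_total"]
scores = pd.DataFrame(rng.normal(size=(45, len(measures))), columns=measures)

fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(StandardScaler().fit_transform(scores))

loadings = pd.DataFrame(fa.components_.T, index=measures,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.round(2))  # high |loading| = measure belongs to that component
```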

32 pages, 11315 KB  
Article
Correspondence of Consonant Clustering with Particular Vowels in German Dialects
by Samantha Link
Languages 2024, 9(7), 255; https://doi.org/10.3390/languages9070255 - 22 Jul 2024
Viewed by 1738
Abstract
Recent work found a correspondence between consonant clustering probability in monosyllabic lexemes and the three vowel types (short monophthong, long monophthong, and diphthong) in German dialects. Furthermore, that correspondence was found to be bound to a North–South divide. This paper explores the consonant-clustering preferences of particular vowels by analyzing the PhonD2-Corpus, a large database of phonotactic and morphological information. The clustering probability of the diphthongs is positively correlated with frequency, while the other vowels show particular preferences that are not positively correlated with frequency. However, all of them are determined by a threefold pattern: short monophthongs prefer coda clusters, diphthongs prefer onset clusters, and long monophthongs are balanced. Furthermore, this threefold pattern seems to have evolved from an originally twofold pattern (short monophthongs prefer coda clusters; long monophthongs and diphthongs prefer onset clusters) in Middle High and Low German. This result is then considered in relation to the compensation of syllable weight and moraicity. Finally, some interesting parallels with the syllable- vs. word-language typology framework are noted.

12 pages, 1147 KB  
Article
Hebrew Digits in Noise (DIN) Test in Cochlear Implant Users and Normal Hearing Listeners
by Riki Taitelbaum-Swead and Leah Fostick
Audiol. Res. 2024, 14(3), 457-468; https://doi.org/10.3390/audiolres14030038 - 20 May 2024
Cited by 1 | Viewed by 1450
Abstract
This study aimed to compare thresholds on the Hebrew version of the digits-in-noise (DIN) test between cochlear implant (CI) users and their normal-hearing (NH) counterparts, explore the influence of age on these thresholds, examine the effects of early auditory exposure versus its absence on the DIN threshold, and assess the correlation between DIN thresholds and other speech perception tests. A total of 13 children with CI (aged 5.5–11 years), 15 pre-lingual CI users (aged 14–30 years), and 15 post-lingual CI users (aged 22–77 years), together with their age-matched NH controls (n = 45), participated in the study. Speech perception tasks, including the DIN test, a one-syllable word test, and sentence identification tasks in various auditory conditions, served as the main outcome measures. The results indicated that CI users exhibited higher speech reception thresholds in noise across all age groups compared to NH peers, with no significant difference between pre-lingual and post-lingual CI users. Significant differences were also observed in monosyllabic word and sentence accuracy in both quiet and noise conditions between the CI and NH groups. Furthermore, correlations were observed between the DIN and other speech perception tests. The study concludes that CI users require a notably higher signal-to-noise ratio to discern digits in noise, underscoring the DIN test's utility in assessing speech recognition capabilities in CI users while emphasizing the need for a comprehensive test battery to fully gauge their speech perception abilities.
(This article belongs to the Special Issue Rehabilitation of Hearing Impairment: 2nd Edition)
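
DIN-type thresholds are typically estimated with a one-up, one-down adaptive track that converges on 50% correct. A simulated sketch of that general procedure follows; the step size, starting SNR, and simulated listener are assumptions, not the published parameters of the Hebrew DIN test.

```python
import random

def simulated_listener(snr_db, srt_db=-8.0):
    """Stand-in listener: more likely correct as SNR rises above its true SRT."""
    p_correct = 1.0 / (1.0 + 10 ** (-(snr_db - srt_db) / 4.0))
    return random.random() < p_correct

def din_srt(n_trials=24, start_snr=0.0, step=2.0):
    """One-up, one-down staircase; SRT = mean SNR of the last 20 trials."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        snr += -step if simulated_listener(snr) else step  # harder if correct
        track.append(snr)
    return sum(track[-20:]) / 20.0

random.seed(1)
print(f"estimated SRT: {din_srt():.1f} dB SNR")
```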

23 pages, 4558 KB  
Article
Diachronic Semantic Tracking for Chinese Words and Morphemes over Centuries
by Yang Chi, Fausto Giunchiglia and Hao Xu
Electronics 2024, 13(9), 1728; https://doi.org/10.3390/electronics13091728 - 30 Apr 2024
Viewed by 1680
Abstract
Lexical semantic changes spanning centuries can reveal the complicated developmental process of language and social culture. In recent years, natural language processing (NLP) methods have been applied in this field to provide insight into diachronic frequency changes for word senses from large-scale historical corpora, for instance, analyzing which senses appear, increase, or decrease at which times. However, there is still a lack of Chinese diachronic corpora and datasets in this field to support supervised learning and text mining, and at the method level, few existing works analyze Chinese semantic changes at the level of the morpheme. This paper constructs a diachronic Chinese dataset for semantic tracking applications spanning 3000 years and extends the existing framework to the level of Chinese characters and morphemes, with four main steps: contextual sense representation, sense identification, morpheme sense mining, and diachronic semantic change representation. Experiments show the effectiveness of our method at each step. Finally, a statistical analysis reveals a strong positive correlation in frequency and trend of change between monosyllabic word senses and their corresponding morphemes.
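
The contextual sense representation and sense identification steps can be approximated generically by clustering context embeddings of a target morpheme and tracking cluster proportions per historical period. The sketch below uses random vectors as placeholders for real contextual embeddings (e.g., from a BERT-style encoder) and shows the pipeline shape, not the authors' exact method.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder: 768-d contextual embeddings of one target morpheme's occurrences,
# keyed by historical period (real vectors would come from an encoder).
periods = {"ancient": rng.normal(size=(200, 768)),
           "middle": rng.normal(size=(150, 768)),
           "modern": rng.normal(size=(300, 768))}

# Identify senses by clustering pooled occurrences, then track per-period
# sense frequency, i.e., which senses grow or shrink across the centuries.
all_vecs = np.vstack(list(periods.values()))
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(all_vecs)

start = 0
for period, vecs in periods.items():
    labels = km.labels_[start:start + len(vecs)]
    start += len(vecs)
    freqs = np.bincount(labels, minlength=3) / len(labels)
    print(period, np.round(freqs, 2))  # proportion of each sense per period
```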

16 pages, 936 KB  
Article
Development of New Open-Set Speech Material for Use in Clinical Audiology with Speakers of British English
by Mahmoud Keshavarzi, Marina Salorio-Corbetto, Tobias Reichenbach, Josephine Marriage and Brian C. J. Moore
Audiol. Res. 2024, 14(2), 264-279; https://doi.org/10.3390/audiolres14020024 - 26 Feb 2024
Viewed by 2263
Abstract
Background: The Chear open-set performance test (COPT), which uses a carrier phrase followed by a monosyllabic test word, is intended for clinical assessment of speech recognition, evaluation of hearing-device performance, and the fine-tuning of hearing devices for speakers of British English. This paper assesses practice effects, test–retest reliability, and the variability across lists of the COPT. Method: In Experiment 1, 16 normal-hearing participants were tested using an initial version of the COPT, at three speech-to-noise ratios (SNRs). Experiment 2 used revised COPT lists, with items swapped between lists to reduce differences in difficulty across lists. In Experiment 3, test–retest repeatability was assessed for stimuli presented in quiet, using 15 participants with sensorineural hearing loss. Results: After administration of a single practice list, no practice effects were evident. The critical difference between scores for two lists was about 2 words (out of 15) or 5 phonemes (out of 50). The mean estimated SNR required for 74% words correct was −0.56 dB, with a standard deviation across lists of 0.16 dB. For the participants with hearing loss tested in quiet, the critical difference between scores for two lists was about 3 words (out of 15) or 6 phonemes (out of 50).
(This article belongs to the Special Issue Rehabilitation of Hearing Impairment: 2nd Edition)
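
As a rough plausibility check on such critical differences, a simple binomial model (which ignores the inter-item correlations the study handles empirically) gives 1.96·√(2p(1−p)n) items for two n-item lists at proportion correct p. With p ≈ 0.9, this lands near the values reported for the hearing-loss group in quiet; the empirically measured critical differences remain the authoritative figures.

```python
import math

def critical_difference(p, n):
    """Approximate 95% critical difference (in items) between two n-item list
    scores, assuming independent binomial items with proportion correct p."""
    return 1.96 * math.sqrt(2 * p * (1 - p) * n)

# p ~ 0.9 is an assumed near-ceiling score for listeners with hearing loss
# tested in quiet; compare with the ~3 words / ~6 phonemes reported above.
print(f"words:    {critical_difference(0.9, 15):.1f} of 15")
print(f"phonemes: {critical_difference(0.9, 50):.1f} of 50")
```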

12 pages, 4108 KB  
Article
Investigation of Maximum Monosyllabic Word Recognition as a Predictor of Speech Understanding with Cochlear Implant
by Ronja Czurda, Thomas Wesarg, Antje Aschendorff, Rainer Linus Beck, Thomas Hocke, Manuel Christoph Ketterer and Susan Arndt
J. Clin. Med. 2024, 13(3), 646; https://doi.org/10.3390/jcm13030646 - 23 Jan 2024
Cited by 4 | Viewed by 1544
Abstract
Background: The cochlear implant (CI) is an established treatment option for patients with inadequate speech understanding and insufficient aided scores. Nevertheless, reliable predictive models and specific therapy goals regarding achievable speech understanding are still lacking. Method: In this retrospective study, 601 cases of CI fitting between 2005 and 2021 at the University Medical Center Freiburg were analyzed. We investigated the preoperative unaided maximum word recognition score (mWRS) as a minimum predictor for post-interventional scores at 65 dB SPL, WRS65(CI). The WRS65(CI) was compared with the preoperative aided WRS, and a previously published prediction model for the WRS65(CI) was reviewed. Furthermore, the effects of duration of hearing loss, duration of hearing aid (HA) fitting, and etiology on WRS65(CI) were investigated. Results: In 95.5% of the cases, a significant improvement in word recognition was observed after CI. WRS65(CI) achieved or exceeded the mWRS in 97% of cases. Etiology had a significant impact on WRS65(CI). The predicted score was missed by more than 20 percentage points in 12.8% of cases. Discussion: Our results confirmed the minimum prediction via mWRS. A more precise prediction of the expected WRS65(CI) is possible. The etiology of hearing loss should be considered in the indication and postoperative care to achieve optimal results.
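
The headline result, whether WRS65(CI) reaches the preoperative unaided maximum, reduces to a per-case comparison over the cohort. A sketch with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical columns: preoperative unaided maximum WRS, preoperative aided
# WRS, and CI-aided WRS at 65 dB SPL, all in percent correct.
df = pd.read_csv("ci_cases.csv")

reached = (df["wrs65_ci"] >= df["mwrs_preop"]).mean()
improved = (df["wrs65_ci"] > df["wrs_aided_preop"]).mean()
print(f"WRS65(CI) reached or exceeded mWRS in {reached:.1%} of cases")
print(f"word recognition improved over the preoperative aided WRS in {improved:.1%}")
```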

11 pages, 1362 KB  
Article
Cue Weighting in Perception of the Retroflex and Non-Retroflex Laterals in the Zibo Dialect of Chinese
by Bing Dong, Jie Liang and Chang Liu
Behav. Sci. 2023, 13(6), 469; https://doi.org/10.3390/bs13060469 - 4 Jun 2023
Viewed by 1694
Abstract
This study investigated cue weighting in the perception of the retroflex and non-retroflex lateral contrast in the monosyllabic words /ɭə/ and /lə/ in the Zibo dialect of Chinese. A binary forced-choice identification task was carried out among 32 native speakers, using computer-modified natural speech situated in a two-dimensional acoustic space. The results showed that both acoustic cues had a significant main effect on lateral identification, with the F1 of the following schwa being the primary cue and the consonant-to-vowel (C/V) duration ratio a secondary cue. No interaction effect was found between the two acoustic cues. Moreover, the results indicated that the acoustic cues were not equally weighted in the production and perception of the syllables /ɭə/ and /lə/ in the Zibo dialect. Future studies involving other acoustic cues (e.g., the F1 of the laterals) or adding noise to the identification task are suggested, to better understand listeners' perceptual strategies for the two laterals in the Zibo dialect.
(This article belongs to the Section Cognition)
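
Cue weights in a two-dimensional identification task are commonly estimated by logistic regression on standardized cue values, the larger absolute coefficient marking the primary cue. The sketch below simulates such an analysis; the stimulus grid and response model are invented, and this is a generic approach, not necessarily the authors' exact statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented 7x7 stimulus grid: F1 of the following schwa (Hz) x C/V duration ratio.
f1, cv = np.meshgrid(np.linspace(300, 600, 7), np.linspace(0.4, 1.0, 7))
X = np.repeat(np.column_stack([f1.ravel(), cv.ravel()]), 20, axis=0)  # 20 trials each

# Simulated listener: F1 dominates the retroflex /ɭə/ vs plain /lə/ decision,
# with the C/V duration ratio as a weaker secondary cue.
z = 0.02 * (X[:, 0] - 450) + 2.0 * (X[:, 1] - 0.7) + rng.normal(0, 1, len(X))
y = (z > 0).astype(int)  # 1 = retroflex response

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
print("standardized cue weights (F1, C/V ratio):", model.coef_[0].round(2))
```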

30 pages, 3143 KB  
Article
A Phonological Study of Rongpa Choyul
by Jingyao Zheng
Languages 2023, 8(2), 133; https://doi.org/10.3390/languages8020133 - 26 May 2023
Cited by 1 | Viewed by 2469
Abstract
This paper presents a detailed description of the phonology of the Rongpa variety of Choyul, an understudied Tibeto-Burman language spoken in Lithang (理塘) County, Dkarmdzes (甘孜) Tibetan Autonomous Prefecture of Sichuan Province, China. Based on firsthand fieldwork data, this paper lays out Rongpa phonology in detail, examining its syllable canon, initial and rhyme systems, and word prosody. Notable characteristics of this phonological system are as follows. First, Rongpa has a substantial phonemic inventory, comprising 43 consonants, 13 vowels, and 2 tones; 84 consonant clusters are observed to serve as syllable initials. Second, a phonemic contrast between plain and uvularized vowels is attested. In addition, regressive vowel harmony in uvularization, height, and lip-roundedness can be clearly observed in various constructions, including prefixed verb stems. Finally, regarding word prosody, two tones in monosyllabic words, /H/ and /L/, are observed to distinguish lexical meanings, and disyllabic words exhibit four surface pitch patterns. Pitch patterns in verb morphology are also examined. The findings and analyses presented in this paper could form a foundation for future research on Rongpa Choyul.
(This article belongs to the Special Issue New Directions for Sino-Tibetan Linguistics in the Mid-21st Century)

19 pages, 2766 KB  
Article
Extended Preoperative Audiometry for Outcome Prediction and Risk Analysis in Patients Receiving Cochlear Implants
by Jan-Henrik Rieck, Annika Beyer, Alexander Mewes, Amke Caliebe and Matthias Hey
J. Clin. Med. 2023, 12(9), 3262; https://doi.org/10.3390/jcm12093262 - 3 May 2023
Cited by 14 | Viewed by 3231
Abstract
Background: The outcome of cochlear implantation has improved over the last decades, but there are still patients with less benefit. Despite numerous studies examining the cochlear implant (CI) outcome, variation in speech comprehension with a CI remains incompletely explained. The aim of this study was therefore to examine the preoperative pure-tone audiogram and speech comprehension, as well as aetiology, and to investigate their relationship with postoperative speech comprehension in CI recipients. Methods: A retrospective study with 664 ears of 530 adult patients was conducted. Correlations of the target variable, postoperative word comprehension, with preoperative speech and sound comprehension as well as aetiology were investigated. Significant correlates were entered into multivariate models. Speech comprehension, measured as the word recognition score at 70 dB with CI, was analyzed as (i) a continuous and (ii) a dichotomous variable. Results: All variables that tested preoperative hearing were significantly correlated with the dichotomous target; with the continuous target, all except word comprehension at 65 dB with hearing aid. The strongest correlation with postoperative speech comprehension was seen for monosyllabic words with hearing aid at 80 dB. The preoperative maximum word comprehension was reached or surpassed by 97.3% of CI patients. Meningitis and congenital diseases were strongly negatively associated with postoperative word comprehension. The multivariate model was able to explain 40% of the postoperative variability. Conclusion: Speech comprehension with hearing aid at 80 dB can be used as a supplementary preoperative indicator of CI-aided speech comprehension and should be measured regularly in clinical routine. Combining audiological and aetiological variables provides more insight into the variability of the CI outcome, allowing for better patient counselling.
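
The statement that the multivariate model explained 40% of postoperative variability corresponds to an R² of about 0.40 from a regression of postoperative word comprehension on preoperative audiometry plus aetiology. A minimal sketch with hypothetical column names:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical columns; aetiology is categorical (meningitis, congenital, ...).
df = pd.read_csv("preop_audiometry.csv")
X = pd.get_dummies(df[["wrs80_aided", "pta_preop", "aetiology"]],
                   columns=["aetiology"])
y = df["wrs70_ci"]  # postoperative word recognition score at 70 dB with CI

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.2f}")  # the study reports about 0.40
```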

11 pages, 633 KB  
Article
Probing the Impact of Prematurity on Segmentation Abilities in the Context of Bilingualism
by Elena Berdasco-Muñoz, Valérie Biran and Thierry Nazzi
Brain Sci. 2023, 13(4), 568; https://doi.org/10.3390/brainsci13040568 - 28 Mar 2023
Viewed by 1958
Abstract
Infants born prematurely are at high risk of developing linguistic deficits. In the current study, we compare how full-term and healthy preterm infants without neuro-sensorial impairments segment words from fluent speech, an ability crucial for lexical acquisition. While early word segmentation abilities have been found in monolingual infants, we test here whether this is also the case for French-dominant bilingual infants with varying non-dominant languages. These bilingual infants were tested on their ability to segment monosyllabic French words from French sentences at 6 months of (postnatal) age, an age at which both full-term and preterm monolinguals are able to segment these words. Our results establish the existence of segmentation skills in these infants, with no significant difference in performance between the two maturation groups. Correlation analyses failed to find effects of gestational age in the preterm group, or of language dominance within the bilingual group. These findings indicate that monosyllabic word segmentation, which has been found to emerge by 4 months in monolingual French-learning infants, is a robust ability acquired at an early age even in the context of bilingualism and prematurity. Future studies should further probe segmentation abilities in more extreme conditions, such as in bilinguals tested in their non-dominant language, in preterm infants with medical issues, or with more complex word structures.
(This article belongs to the Special Issue Neurodevelopmental Disorders and Early Language Acquisition)