Search Results (188)

Search Parameters:
Keywords = syllable

20 pages, 1589 KB  
Article
Articulatory Control by Gestural Coupling and Syllable Pulses
by Christopher Geissler
Languages 2025, 10(9), 219; https://doi.org/10.3390/languages10090219 - 29 Aug 2025
Viewed by 234
Abstract
Explaining the relative timing of consonant and vowel articulations (C-V timing) is an important function of speech production models. This article explores how C-V timing might be studied from the perspective of the C/D Model, particularly the prediction that articulations are coordinated with respect to an abstract syllable pulse. Gestural landmarks were extracted from kinematic data from English CVC monosyllabic words in the Wisconsin X-Ray Microbeam Corpus. The syllable pulse was identified using velocity peaks, and temporal lags were calculated among landmarks and the syllable pulse. The results directly follow from the procedure used to identify pulses: onset consonants exhibited stable timing to the pulse, while vowel-to-pulse timing was comparably stable with respect to C-V timing. Timing relationships with jaw displacement and jaw-based syllable pulse metrics were also explored. These results highlight current challenges for the C/D Model, as well as opportunities for elaborating the model to account for C-V timing. Full article
(This article belongs to the Special Issue Research on Articulation and Prosodic Structure)
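As a rough illustration of the pulse-identification procedure sketched in the abstract above (velocity peaks located in an articulator trajectory, with lags measured between gestural landmarks and that pulse), a minimal Python sketch follows. The trajectory, sampling rate, and peak threshold are invented for illustration; this is not the authors' analysis code.

```python
# Minimal sketch: locate velocity peaks in an articulator trajectory and
# measure lags relative to a candidate syllable pulse.
# Hypothetical signal and sampling rate; not the C/D Model analysis itself.
import numpy as np
from scipy.signal import find_peaks

fs = 145.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.6, 1 / fs)                # one 600 ms token
tongue_y = 5 * np.sin(2 * np.pi * 2 * t)     # stand-in articulator trajectory (mm)

velocity = np.gradient(tongue_y, 1 / fs)     # numerical derivative (mm/s)
speed = np.abs(velocity)
peaks, _ = find_peaks(speed, height=0.2 * speed.max())   # velocity peaks

pulse_idx = peaks[np.argmax(speed[peaks])]   # treat the largest peak as the pulse
lags_ms = (peaks - pulse_idx) / fs * 1000    # lag of every landmark to the pulse
print(f"pulse at {pulse_idx / fs * 1000:.0f} ms; landmark lags (ms): {np.round(lags_ms)}")
```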

25 pages, 1403 KB  
Protocol
Discrimination and Integration of Phonological Features in Children with Autism Spectrum Disorder: An Exploratory Multi-Feature Oddball Protocol
by Mingyue Zuo, Yang Zhang, Rui Wang, Dan Huang, Luodi Yu and Suiping Wang
Brain Sci. 2025, 15(9), 905; https://doi.org/10.3390/brainsci15090905 - 23 Aug 2025
Viewed by 415
Abstract
Background/Objectives: Children with Autism Spectrum Disorder (ASD) often display heightened sensitivity to simple auditory stimuli, but have difficulty discriminating and integrating multiple phonological features (segmental: consonants and vowels; suprasegmental: lexical tones) at the syllable level, which negatively impacts their communication. This study aims to investigate the neural basis of segmental, suprasegmental and combinatorial speech processing challenges in Mandarin-speaking children with ASD compared with typically developing (TD) peers. Methods: Thirty children with ASD and thirty TD peers will complete a multi-feature oddball paradigm to elicit auditory ERP during passive listening. Stimuli include syllables with single (e.g., vowel only), dual (e.g., vowel + tone), and triple (consonant + vowel + tone) phonological deviations. Neural responses will be analyzed using temporal principal component analysis (t-PCA) to isolate overlapping ERP components (early/late MMN), and representational similarity analysis (RSA) to assess group differences in neural representational structure across feature conditions. Expected Outcomes: We adopt a dual-framework approach to hypothesis generation. First, from a theory-driven perspective, we integrate three complementary models, Enhanced Perceptual Functioning (EPF), Weak Central Coherence (WCC), and the Neural Complexity Hypothesis (NCH), to account for auditory processing in ASD. Specifically, we hypothesize that ASD children will show enhanced or intact neural discriminatory responses to isolated segmental deviations (e.g., vowel), but attenuated or delayed responses to suprasegmental (e.g., tone) and multi-feature deviants, with the most severe disruptions occurring in complex, multi-feature conditions. Second, from an empirically grounded, data-driven perspective, we derive our central hypothesis directly from the mismatch negativity (MMN) literature, which suggests reduced MMN amplitudes (with the exception of vowel deviants) and prolonged latencies accompanied by a diminished left-hemisphere advantage across all speech feature types in ASD, with the most pronounced effects in complex, multi-feature conditions. Significance: By testing alternative hypotheses and predictions, this exploratory study will clarify the extent to which speech processing differences in ASD reflect cognitive biases (local vs. global, per EPF/WCC/NCH) versus speech-specific neurophysiological disruptions. Findings will advance our understanding of the sensory and integrative mechanisms underlying communication difficulties in ASD, particularly in tonal language contexts, and may inform the development of linguistically tailored interventions. Full article
(This article belongs to the Special Issue Language Perception and Processing)

20 pages, 1064 KB  
Article
Very Young Children Learning German Notice the Incorrect Syllable Stress of Words
by Ulrike Schild and Claudia Katrin Friedrich
Languages 2025, 10(8), 197; https://doi.org/10.3390/languages10080197 - 18 Aug 2025
Viewed by 407
Abstract
Syllable stress can help to quickly identify words in a language with variable stress placement like German. Here, we asked at what age incorrect syllable stress impairs language learners’ attempts to assign meaning to familiar words. We recorded the looking times of young children learning German aged from 4 to 15 months (infants, N = 69) and 2 to 4 years (toddlers, N = 28). Participants saw displays of two pictures (e.g., a car and a baby); one of the two objects (the target) was named. The disyllabic name of the target was either correctly stressed on the first syllable (“BA.by”) or it was incorrectly stressed on the second syllable (“ba.BY”). On average, all children looked more at the target when they heard its correctly stressed name (compared to the incorrectly stressed name). Furthermore, the analyses of growth curves for all children showed a steeper increase in looking time at the target picture when children heard the correctly stressed target’s name compared to the incorrectly stressed name. These results thus suggest that even very young German-learning children use syllable stress for incremental word-meaning mapping. However, separate post hoc analyses revealed robust differences in overall target fixations only in toddlers but not in infants. The stronger effects in toddlers compared to infants could be related either to the growing vocabulary or the increasing sensitivity to word stress with increasing age. Full article
(This article belongs to the Special Issue Advances in the Acquisition of Prosody)

17 pages, 927 KB  
Article
The Influence of Teaching Songs with Text and a Neutral Syllable on 4-to-9-Year-Old Portuguese Children’s Vocal Performance
by Ana Isabel Pereira and Helena Rodrigues
Educ. Sci. 2025, 15(8), 984; https://doi.org/10.3390/educsci15080984 - 1 Aug 2025
Viewed by 323
Abstract
Research on children’s singing development is extensive. Different ages, approaches, and variables have been taken into consideration. However, research on singing with text or a neutral syllable is scarce, and findings are inconclusive. This study investigated the influence of singing with text and a neutral syllable on children’s vocal performance. Children aged 4 to 9 (n = 135) participated in two periods of instruction and assessment. In Period One, Song 1 was taught with text and Song 2 with a neutral syllable, and in Period Two, the text was added to Song 2. In each period, children were individually audio-recorded singing both songs. Three independent raters scored the songs’ vocal performances using two researcher-designed rating scales, one for each song, which included the assessment of tonal and rhythm dimensions. Before data analysis, the validity and reliability of the rating scales used to assess vocal performance were examined and assured. The results revealed that 4-, 5-, and 7-year-olds sang Song 1 significantly better in Period One, and 4- and 5-year-olds sang Song 1 significantly better in Period Two. Thus, singing with text seems to favour younger children’s vocal performance. Findings also revealed that girls scored significantly higher than boys for Song 1 in both periods, but not for Song 2 in Period One. The implications of incorporating songs with text and neutral syllables into music programs, as well as the instruments used to assess vocal performances, are discussed. Full article
(This article belongs to the Special Issue Contemporary Issues in Music Education: International Perspectives)

16 pages, 2283 KB  
Article
Recognition of Japanese Finger-Spelled Characters Based on Finger Angle Features and Their Continuous Motion Analysis
by Tamon Kondo, Ryota Murai, Zixun He, Duk Shin and Yousun Kang
Electronics 2025, 14(15), 3052; https://doi.org/10.3390/electronics14153052 - 30 Jul 2025
Viewed by 322
Abstract
To improve the accuracy of Japanese finger-spelled character recognition using an RGB camera, we focused on feature design and refinement of the recognition method. By leveraging angular features extracted via MediaPipe, we proposed a method that effectively captures subtle motion differences while minimizing the influence of background and surrounding individuals. We constructed a large-scale dataset that includes not only the basic 50 Japanese syllables but also those with diacritical marks, such as voiced sounds (e.g., “ga”, “za”, “da”) and semi-voiced sounds (e.g., “pa”, “pi”, “pu”), to enhance the model’s ability to recognize a wide variety of characters. In addition, the application of a change-point detection algorithm enabled accurate segmentation of sign language motion boundaries, improving word-level recognition performance. These efforts laid the foundation for a highly practical recognition system. However, several challenges remain, including the limited size and diversity of the dataset and the need for further improvements in segmentation accuracy. Future work will focus on enhancing the model’s generalizability by collecting more diverse data from a broader range of participants and incorporating segmentation methods that consider contextual information. Ultimately, the outcomes of this research should contribute to the development of educational support tools and sign language interpretation systems aimed at real-world applications. Full article
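The angular features mentioned above can be illustrated with a short sketch that derives joint angles from MediaPipe's 21-point hand landmarks. The image path, the choice of landmark triples, and the printed feature set are assumptions for illustration; this is not the paper's actual feature pipeline or change-point segmentation.

```python
# Minimal sketch: finger joint-angle features from MediaPipe hand landmarks.
# "hand.jpg" is a placeholder path; the full system described in the article
# (diacritic coverage, change-point detection, classification) is not shown.
import cv2
import numpy as np
import mediapipe as mp

def joint_angle(a, b, c):
    """Angle (degrees) at landmark b formed by landmarks a-b-c."""
    v1 = np.array([a.x - b.x, a.y - b.y, a.z - b.z])
    v2 = np.array([c.x - b.x, c.y - b.y, c.z - b.z])
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

image = cv2.cvtColor(cv2.imread("hand.jpg"), cv2.COLOR_BGR2RGB)
with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    result = hands.process(image)

if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark      # 21 landmarks per hand
    # PIP-joint angles for index..pinky: landmark triples (MCP, PIP, DIP)
    triples = [(5, 6, 7), (9, 10, 11), (13, 14, 15), (17, 18, 19)]
    angles = [joint_angle(lm[i], lm[j], lm[k]) for i, j, k in triples]
    print("PIP angles (deg):", np.round(angles, 1))
```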

17 pages, 919 KB  
Article
Timing of Intervals Between Utterances in Typically Developing Infants and Infants Later Diagnosed with Autism Spectrum Disorder
by Zahra Poursoroush, Gordon Ramsay, Ching-Chi Yang, Eugene H. Buder, Edina R. Bene, Pumpki Lei Su, Hyunjoo Yoo, Helen L. Long, Cheryl Klaiman, Moira L. Pileggi, Natalie Brane and D. Kimbrough Oller
Brain Sci. 2025, 15(8), 819; https://doi.org/10.3390/brainsci15080819 - 30 Jul 2025
Viewed by 407
Abstract
Background: Understanding the origin and natural organization of early infant vocalizations is important for predicting communication and language abilities in later years. The very frequent production of speech-like vocalizations (hereafter “protophones”), occurring largely independently of interaction, is part of this developmental process. Objectives: This study aims to investigate the gap durations (time intervals) between protophones, comparing typically developing (TD) infants and infants later diagnosed with autism spectrum disorder (ASD) in a naturalistic setting where endogenous protophones occur frequently. Additionally, we explore potential age-related variations and sex differences in gap durations. Methods: We analyzed ~1500 five min recording segments from longitudinal all-day home recordings of 147 infants (103 TD infants and 44 autistic infants) during their first year of life. The data included over 90,000 infant protophones. Human coding was employed to ensure maximally accurate timing data. This method included the human judgment of gap durations specified based on time-domain and spectrographic displays. Results and Conclusions: Short gap durations occurred between protophones produced by infants, with a mode between 301 and 400 ms, roughly the length of an infant syllable, across all diagnoses, sex, and age groups. However, we found significant differences in the gap duration distributions between ASD and TD groups when infant-directed speech (IDS) was relatively frequent, as well as across age groups and sexes. The Generalized Linear Modeling (GLM) results confirmed these findings and revealed longer gap durations associated with higher IDS, female sex, older age, and TD diagnosis. Age-related differences and sex differences were highly significant for both diagnosis groups. Full article
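A hedged sketch of the kind of Generalized Linear Model described above (gap duration modeled as a function of IDS exposure, sex, age, and diagnosis) is shown below. The file name, column names, and the Gamma/log specification are assumptions made for illustration, not the study's analysis code.

```python
# Minimal sketch: GLM relating inter-utterance gap duration to IDS rate,
# sex, age, and diagnosis. Hypothetical columns; not the study's code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("gap_durations.csv")   # placeholder: one row per gap
# gap_ms: gap duration; ids_rate: infant-directed speech in the segment;
# sex: M/F; age_months: infant age; dx: "TD" or "ASD"
model = smf.glm(
    "gap_ms ~ ids_rate + C(sex) + age_months + C(dx)",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(model.summary())
```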

23 pages, 4184 KB  
Article
Game on: Computerized Training Promotes Second Language Stress–Suffix Associations
by Kaylee Fernandez and Nuria Sagarra
Languages 2025, 10(7), 170; https://doi.org/10.3390/languages10070170 - 16 Jul 2025
Cited by 1 | Viewed by 418
Abstract
Effective language processing relies on pattern detection. Spanish monolinguals predict verb tense through stress–suffix associations: a stressed first syllable signals present tense, while an unstressed first syllable signals past tense. Low-proficiency second language (L2) Spanish learners struggle to detect these associations, and we investigated whether they benefit from game-based training. We examined the effects of four variables on their ability to detect stress–suffix associations: three linguistic variables—verbs’ lexical stress (oxytones/paroxytones), first-syllable structure (consonant–vowel, CV/consonant–vowel–consonant, CVC), and phonotactic probability—and one learner variable—working memory (WM) span. Beginner English learners of Spanish played a digital game focused on stress–suffix associations for 10 days and completed a Spanish proficiency test (Lextale-Esp), a Spanish background and use questionnaire, and a Corsi WM task. The results revealed moderate gains in the acquisition of stress–suffix associations. Accuracy gains were observed for CV verbs and oxytones, and overall reaction times (RTs) decreased with gameplay. Higher-WM learners were more accurate and slower than lower-WM learners in all verb-type conditions. Our findings suggest that prosody influences word activation and that digital gaming can help learners attend to L2 inflectional morphology. Full article

24 pages, 3281 KB  
Article
A Quantitative and Qualitative Analysis of the Phonetic and Phonological Development of Children with Cochlear Implants and Its Relationship with Early Literacy
by Marinella Majorano, Michela Santangelo, Irene Redondi, Chiara Barachetti, Letizia Guerzoni and Domenico Cuda
Audiol. Res. 2025, 15(4), 81; https://doi.org/10.3390/audiolres15040081 - 3 Jul 2025
Viewed by 679
Abstract
Background/Objectives: During the transition to primary school, children with cochlear implants (CIs) may show language and early literacy fragilities. This study has three aims. First, it compares the phonetic and phonological skills of preschoolers with CIs and those with normal hearing (NH); second, it investigates the correlation between phonetic/phonological and emergent literacy skills in the two groups; third, it explores the relationship between phonetic/phonological skills and age at implantation in preschoolers with CIs. Methods: Sixteen children with CIs (Mage = 61 months; SD = 6.50) and twenty children with NH (Mage = 64 months; SD = 4.30) participated in the study. Phonetic and phonological skills (phonetic inventories and phonological processes) and early literacy skills (phonological awareness and print knowledge) were assessed. Group differences and relationships between the variables of interest were considered in the two groups. Results: A qualitative analysis of phonetic and phonological development showed differences between the two groups. There were also significant differences in early literacy skills (e.g., in syllable segmentation). Significant correlations emerged in both groups between phonetic/phonological skills and early literacy, although in different variables. Significant correlations were also found between age at implantation and the phonetic inventory in children with CIs. Conclusions: Preschoolers with CIs display more delays in the phonetic and phonological production skills and more emergent literacy fragilities than NH peers. However, print knowledge did not differ significantly between the groups. Early implantation supports the phonetic skills associated with subsequent literacy learning. Full article

15 pages, 1545 KB  
Article
Speech Recognition in Noise: Analyzing Phoneme, Syllable, and Word-Based Scoring Methods and Their Interaction with Hearing Loss
by Saransh Jain, Vijaya Kumar Narne, Bharani, Hema Valayutham, Thejaswini Madan, Sunil Kumar Ravi and Chandni Jain
Diagnostics 2025, 15(13), 1619; https://doi.org/10.3390/diagnostics15131619 - 26 Jun 2025
Viewed by 665
Abstract
Introduction: This study aimed to compare different scoring methods, such as phoneme, syllable, and word-based scoring, during word recognition in noise testing and their interaction with hearing loss severity. These scoring methods provided a structured framework for refining clinical audiological diagnosis by revealing underlying auditory processing at multiple linguistic levels. We highlight how scoring differences inform differential diagnosis and guide targeted audiological interventions. Methods: Pure tone audiometry and word-in-noise testing were conducted on 100 subjects with a wide range of hearing loss severity. Speech recognition was scored using phoneme, syllable, and word-based methods. All procedures were designed to reflect standard diagnostic protocols in clinical audiology. Discriminant function analysis examined how these scoring methods differentiate the degree of hearing loss. Results: Results showed that each method provides unique information about auditory processing. Phoneme-based scoring has pointed out basic auditory discrimination; syllable-based scoring can capture temporal and phonological processing, while word-based scoring reflects real-world listening conditions by incorporating contextual knowledge. These findings emphasize the diagnostic value of each scoring approach in clinical settings, aiding differential diagnosis and treatment planning. Conclusions: This study showed the effect of different scoring methods on hearing loss differentiation concerning severity. We recommend the integration of phoneme-based scoring into standard diagnostic batteries to enhance early detection and personalize rehabilitation strategies. Future research must involve studies about integration with other speech perception tests and applicability across different clinical settings. Full article
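As a rough stand-in for the discriminant function analysis described above, the sketch below fits a linear discriminant classifier that predicts hearing-loss severity from phoneme-, syllable-, and word-level scores. The data file and column names are hypothetical; it illustrates the general technique rather than the study's procedure.

```python
# Minimal sketch: linear discriminant analysis predicting hearing-loss
# category from phoneme-, syllable-, and word-based scores.
# Hypothetical dataset; not the study's analysis code.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

df = pd.read_csv("win_scores.csv")        # placeholder word-in-noise scores
X = df[["phoneme_score", "syllable_score", "word_score"]]
y = df["hl_severity"]                     # e.g., mild / moderate / severe

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"5-fold classification accuracy: {acc:.2f}")
```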

17 pages, 666 KB  
Article
English-Learning Infants’ Developing Sensitivity to Intonation Contours
by Megha Sundara and Sónia Frota
Languages 2025, 10(7), 148; https://doi.org/10.3390/languages10070148 - 20 Jun 2025
Viewed by 523
Abstract
In four experiments, we investigated when and how English-learning infants perceive intonation contours that signal prosodic units. Using visual habituation, we probed infants’ ability to discriminate disyllabic sequences with a fall versus a rise in pitch on the final syllable, a salient cue used to distinguish statements from questions. First, we showed that at 8 months, English-learning infants can distinguish statement falls from question rises, as has been reported previously for their European Portuguese-learning peers who have extensive experience with minimal pairs that differ just in pitch rises and falls. Next, we conducted three experiments involving 4-month-olds to determine the developmental roots of how English-learning infants begin to tune into these intonation contours. In Experiment 2, we showed that unlike 8-month-olds, monolingual English-learning 4-month-olds are unable to distinguish statement and question intonation when they are presented with segmentally varied disyllabic sequences. Monolingual English-learning 4-month-olds only partially succeeded even when tested without segmental variability and a sensitive testing procedure (Experiment 3). When tested with stimuli that had been resynthesized to remove correlated duration cues as well, 4-month-olds demonstrated only partial success (Experiment 4). We discuss our results in the context of extant developmental research on how infants tune into linguistically relevant pitch cues in their first year of life. Full article
(This article belongs to the Special Issue Advances in the Acquisition of Prosody)

15 pages, 1134 KB  
Article
Is the Prosodic Structure of Texts Reflected in Silent Reading? An Eye-Tracking Corpus Analysis
by Marijan Palmović and Kristina Cergol
J. Eye Mov. Res. 2025, 18(3), 24; https://doi.org/10.3390/jemr18030024 - 18 Jun 2025
Viewed by 465
Abstract
The aim of this study was to test the Implicit Prosody Hypothesis using a reading corpus, i.e., a text without experimental manipulation labelled with eye-tracking parameters. For this purpose, a bilingual Croatian–English reading corpus was analysed. In prosodic terms, Croatian and English are at the opposite ends of the spectrum: English is considered a time-framed language, while Croatian is a syllable-framed language. This difference served as a kind of experimental control in this study on natural reading. The results show that readers’ eyes lingered more on stressed syllables than on the arrangement of stressed and unstressed syllables for both languages. This is especially pronounced for English, a language with greater differences in the duration of stressed and unstressed syllables. This study provides indirect evidence in favour of the Implicit Prosody Hypothesis, i.e., the idea that readers are guided by their inner voice with its suprasegmental features when reading silently. The differences between the languages can be traced back to the typological differences in stress in English and Croatian. Full article

15 pages, 2337 KB  
Article
Is It About Speech or About Prediction? Testing Between Two Accounts of the Rhythm–Reading Link
by Susana Silva, Ana Rita Batista, Nathércia Lima Torres, José Sousa, Aikaterini Liapi, Styliani Bairami and Vasiliki Folia
Brain Sci. 2025, 15(6), 642; https://doi.org/10.3390/brainsci15060642 - 14 Jun 2025
Viewed by 852
Abstract
Background/Objectives: The mechanisms underlying the positive association between reading and rhythmic skills remain unclear. Our goal was to systematically test between two major explanations: the Temporal Sampling Framework (TSF), which highlights the relation between rhythm and speech encoding, and a competing explanation based on rhythm’s role in enhancing prediction within visual and auditory sequences. Methods: We compared beat versus duration perception for their associations with encoding and sequence learning (prediction-related) tasks, using both visual and auditory sequences. We also compared these associations for Portuguese vs. Greek participants, since Portuguese stress-timed rhythm is more compatible with music-like beats lasting around 500 ms, in contrast to the syllable-timed rhythm of Greek. If rhythm acts via speech encoding, its effects should be more salient in Portuguese. Results: Consistent with the TSF’s predictions, we found a significant association between beat perception and auditory encoding in Portuguese but not in Greek participants. Correlations between time perception and sequence learning in both modalities were either null or insufficiently supported in both groups. Conclusions: Altogether, the evidence supported the TSF-related predictions in detriment of the Rhythm-as-Predictor (RaP) hypothesis. Full article
(This article belongs to the Section Neurolinguistics)

14 pages, 896 KB  
Article
Dita.te—A Dictation Assessment Instrument with Automatic Analysis
by Daniela Saraiva, Ana Margarida Ramalho, Ana Rita Valente, Cláudia Rocha and Marisa Lousada
Children 2025, 12(6), 774; https://doi.org/10.3390/children12060774 - 14 Jun 2025
Viewed by 2112
Abstract
Background/Objectives: To date, there are no validated tools that assess children’s performance in connected text dictation tasks in European Portuguese using automated analysis. International studies were identified, but these primarily involved word dictation tasks and did not use automatic scoring tools. The present study aims to assess the reliability of the Dita.te (internal consistency and inter-rater reliability), a written assessment test based on a dictation task with automatic spreadsheet analysis, and establish normative data for text dictation tasks for children from 3rd to 6th grade. Methods: This study included 315 European Portuguese-speaking children from the 3rd to 6th grades. The Dita.te tool was used to assess orthographic errors based on phonological, morphological, and prosodic criteria. Descriptive statistics, percentiles, the inter-rater reliability and internal consistency were analyzed. Non-parametric tests compared performance by gender and school year due to a non-normal data distribution. Results: The Dita.te had excellent internal consistency (α = 0.929). The correlation between items scored highly (Intraclass Correlation Coefficient = 0.925). The number of errors decreased as the school year progressed, with errors affecting the syllable nucleus being the most frequent across all school years. These were followed by orthographic substitution errors, with grapheme omission being the most prevalent. Conclusions: Our findings suggest that orthographic competence is mostly stable before the 3rd grade, and the mismatches found in children with typical development show residual error in their orthographic performance. Full article
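For readers unfamiliar with the reliability figures quoted above, the sketch below shows one standard way internal consistency (Cronbach's alpha) can be computed from a children-by-items score table. The file name is a placeholder; this is not the Dita.te spreadsheet itself.

```python
# Minimal sketch: Cronbach's alpha over item scores
# (rows = children, columns = scored items). Hypothetical data file.
import pandas as pd

items = pd.read_csv("ditate_item_scores.csv")       # placeholder score table
k = items.shape[1]                                    # number of items
item_var = items.var(axis=0, ddof=1).sum()            # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)             # variance of total scores
alpha = (k / (k - 1)) * (1 - item_var / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```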

24 pages, 1461 KB  
Article
Syllable-, Bigram-, and Morphology-Driven Pseudoword Generation in Greek
by Kosmas Kosmidis, Vassiliki Apostolouda and Anthi Revithiadou
Appl. Sci. 2025, 15(12), 6582; https://doi.org/10.3390/app15126582 - 11 Jun 2025
Viewed by 572
Abstract
Pseudowords are essential in (psycho)linguistic research, offering a way to study language without meaning interference. Various methods for creating pseudowords exist, but each has its limitations. Traditional approaches modify existing words, risking unintended recognition. Modern algorithmic methods use high-frequency n-grams or syllable deconstruction but often require specialized expertise. Currently, no automatic process for pseudoword generation is designed explicitly for Greek, which is our primary focus. Therefore, we developed SyBig-r-Morph, a novel application that constructs pseudowords using syllables as the main building block, replicating Greek phonotactic patterns. SyBig-r-Morph draws input from word lists and databases that include syllabification, word length, part of speech, and frequency information. It categorizes syllables by position to ensure phonotactic consistency with user-selected morphosyntactic categories and can optionally assign stress to generated words. Additionally, the tool uses multiple lexicons to eliminate phonologically invalid combinations. Its modular architecture allows easy adaptation to other languages. To further evaluate its output, we conducted a manual assessment using a tool that verifies phonotactic well-formedness based on phonological parameters derived from a corpus. Most SyBig-r-Morph words passed the stricter phonotactic criteria, confirming the tool’s sound design and linguistic adequacy. Full article
(This article belongs to the Special Issue Computational Linguistics: From Text to Speech Technologies)
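The position-indexed syllable sampling described above can be illustrated with a toy generator. The syllable inventories and mini-lexicon below are invented, and SyBig-r-Morph's frequency weighting, morphosyntactic categories, stress assignment, and phonotactic filtering are omitted; this is only the general idea, not the tool.

```python
# Toy sketch of position-indexed syllable sampling for pseudoword generation.
# Invented inventories; the real tool also applies lexicon and phonotactic checks.
import random

# Syllables grouped by word position, as might be extracted from a
# syllabified frequency lexicon.
INITIAL = ["ka", "pe", "sti", "vro"]
MEDIAL  = ["la", "mi", "to"]
FINAL   = ["nos", "ma", "ri"]

REAL_WORDS = {"kalanos"}   # stand-in lexicon used to reject real words

def make_pseudoword(n_syllables: int = 3) -> str:
    """Sample one syllable per position; resample if the result is a real word."""
    while True:
        middle = [random.choice(MEDIAL) for _ in range(n_syllables - 2)]
        word = random.choice(INITIAL) + "".join(middle) + random.choice(FINAL)
        if word not in REAL_WORDS:
            return word

random.seed(1)
print([make_pseudoword() for _ in range(5)])
```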

33 pages, 3861 KB  
Article
The Importance of Being Onset: Tuscan Lenition and Stops in Coda Position
by Giuditta Avano and Piero Cossu
Languages 2025, 10(6), 129; https://doi.org/10.3390/languages10060129 - 30 May 2025
Viewed by 2686
Abstract
This paper examines Gorgia Toscana (GT), a phenomenon of stop lenition observed in Tuscan varieties of Italian. Traditionally, this process has been understood to occur in post-vocalic positions, which, in the native lexicon, corresponds to onset position due to the absence of stops in syllable codas in Italian, apart from geminate consonants that straddle the coda and onset of adjacent syllables. However, stops in coda positions are found in both loanwords (e.g., admin, Batman) and bookwords (e.g., ritmo, tecnica). Drawing on original acoustic data collected from 42 native speakers of Florentine Italian, we investigated the realization of stops in such lexical items through allophonic classification and quantitative analysis. Our primary aim was to test the Onset Hypothesis, which posits that Gorgia exclusively affects stops in onset positions, implying that coda stops should not undergo lenition. Our findings support this hypothesis. We provide a phonological analysis within the frameworks of Strict CV and Coda Mirror, emphasizing the importance of syllable structure in understanding the manifestation of Gorgia Toscana, which we argue cannot be adequately captured solely by considering the linear order of segments. Full article
(This article belongs to the Special Issue Speech Variation in Contemporary Italian)