Search Results (382)

Search Parameters:
Keywords = computational linguistics

26 pages, 2814 KB  
Article
Research on Making Two Models Based on the Generative Linguistic Steganography for Securing Linguistic Steganographic Texts from Active Attacks
by Yingquan Chen, Qianmu Li, Xiaocong Wu and Zijian Ying
Symmetry 2025, 17(9), 1416; https://doi.org/10.3390/sym17091416 (registering DOI) - 1 Sep 2025
Abstract
Generative steganographic text covertly transmits hidden information through readable text that is unrelated to the message. Existing AI-based linguistic steganography primarily focuses on improving text quality to evade detection and therefore only addresses passive attacks. Active attacks, such as text tampering, can disrupt the symmetry between encoding and decoding, which in turn prevents accurate extraction of hidden information. To investigate these threats, we construct two attack models: the in-domain synonym substitution attack (ISSA) and the out-of-domain random tampering attack (ODRTA), with ODRTA further divided into continuous (CODRTA) and discontinuous (DODRTA) types. To enhance robustness, we propose a proactive adaptive-clustering defense against ISSA, and, for CODRTA and DODRTA, a post-hoc repair mechanism based on context-oriented search and the determinism of text generation. Experimental results demonstrate that these mechanisms effectively counter all attack types and significantly improve the integrity and usability of hidden information. The main limitation of our approach is the relatively high computational cost of defending against ISSA. Future work will focus on improving efficiency and expanding practical applicability. Full article
(This article belongs to the Section Computer)
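A minimal Python sketch of the kind of tampering the ISSA threat model describes: a naive attacker swaps a fraction of words for WordNet synonyms, which is enough to break the encoding/decoding symmetry. This is an illustration only, not the authors' attack model; it assumes NLTK's WordNet data is available.

```python
# Illustrative sketch (not the authors' attack model): a naive synonym-substitution
# attack that tampers with a carrier text, in the spirit of the ISSA threat model.
# Assumes the NLTK WordNet data is available (downloaded on first run below).
import random

import nltk
from nltk.corpus import wordnet


def synonym_substitution_attack(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Replace roughly `rate` of the words with a WordNet synonym, when one exists."""
    rng = random.Random(seed)
    attacked = []
    for word in text.split():
        if rng.random() < rate:
            synonyms = {
                lemma.name().replace("_", " ")
                for synset in wordnet.synsets(word.lower())
                for lemma in synset.lemmas()
            }
            synonyms.discard(word.lower())
            if synonyms:
                attacked.append(rng.choice(sorted(synonyms)))
                continue
        attacked.append(word)
    return " ".join(attacked)


if __name__ == "__main__":
    nltk.download("wordnet", quiet=True)
    cover = "the quick brown fox jumps over the lazy dog"
    print(synonym_substitution_attack(cover, rate=0.3))
```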

14 pages, 657 KB  
Article
Pretrained Models Against Traditional Machine Learning for Detecting Fake Hadith
by Jawaher Alghamdi, Adeeb Albukhari and Thair Al-Dala’in
Electronics 2025, 14(17), 3484; https://doi.org/10.3390/electronics14173484 (registering DOI) - 31 Aug 2025
Abstract
The proliferation of fake news, particularly in sensitive domains like religious texts, necessitates robust authenticity verification methods. This study addresses the growing challenge of authenticating Hadith, where traditional methods relying on the analysis of the chain of narrators (Isnad) and the content (Matn) are increasingly strained by the sheer volume in circulation. To combat this issue, machine learning (ML) and natural language processing (NLP) techniques, specifically through transfer learning, are explored to automate Hadith classification into Genuine and Fake categories. This study utilizes an imbalanced dataset of 8544 Hadiths, with 7008 authentic and 1536 fake Hadiths, to systematically investigate the collective impact of both linguistic and contextual features, particularly the chain of narrators (Isnad), on Hadith authentication. For the first time in this specialized domain, state-of-the-art pre-trained language models (PLMs) such as Multilingual BERT (mBERT), CamelBERT, and AraBERT are evaluated alongside classical algorithms like logistic regression (LR) and support vector machine (SVM) for Hadith authentication. Our best-performing model, AraBERT, achieved a 99.94% F1-score when including the chain of narrators, demonstrating the profound effectiveness of contextual elements (Isnad) in significantly improving accuracy, providing novel insights into the indispensable role of computational methods in Hadith authentication and reinforcing traditional scholarly emphasis. This research represents a significant advancement in combating misinformation in this important field. Full article
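For the classical side of the comparison, a minimal scikit-learn sketch of a TF-IDF plus logistic regression baseline; the toy strings stand in for (Isnad + Matn) text and are not the paper's dataset or exact feature setup.

```python
# Minimal sketch of the classical-ML side of the comparison: TF-IDF features with
# logistic regression in scikit-learn. The toy strings below stand in for
# (Isnad + Matn) text and are not the paper's dataset or exact feature setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

texts = [
    "isnad: narrator A from narrator B ... matn: text of the hadith ...",
    "isnad: unknown chain ... matn: fabricated content ...",
    "isnad: narrator C from narrator D ... matn: another authentic text ...",
    "isnad: broken chain ... matn: more fabricated content ...",
]
labels = [1, 0, 1, 0]  # 1 = Genuine, 0 = Fake

clf = make_pipeline(
    # Character n-grams are a common choice for morphologically rich Arabic text.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print("training-set F1:", f1_score(labels, clf.predict(texts)))
```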

28 pages, 1705 KB  
Article
Identifying Literary Microgenres and Writing Style Differences in Romanian Novels with ReaderBench and Large Language Models
by Aura Cristina Udrea, Stefan Ruseti, Vlad Pojoga, Stefan Baghiu, Andrei Terian and Mihai Dascalu
Future Internet 2025, 17(9), 397; https://doi.org/10.3390/fi17090397 (registering DOI) - 30 Aug 2025
Viewed by 45
Abstract
Recent developments in natural language processing, particularly large language models (LLMs), create new opportunities for literary analysis in underexplored languages like Romanian. This study investigates stylistic heterogeneity and genre blending in 175 late 19th- and early 20th-century Romanian novels, each classified by literary historians into one of 17 genres. Our findings reveal that most novels do not adhere to a single genre label but instead combine elements of multiple (micro)genres, challenging traditional single-label classification approaches. We employed a dual computational methodology that combines analysis based on Romanian-tailored linguistic features with general-purpose LLMs. ReaderBench, a Romanian-specific framework, was utilized to extract surface, syntactic, semantic, and discourse features, capturing fine-grained linguistic patterns. In parallel, we prompted two LLMs (Llama3.3 70B and DeepSeek-R1 70B) to predict genres at the paragraph level, leveraging their ability to detect contextual and thematic coherence across multiple narrative scales. Statistical analyses using Kruskal–Wallis and Mann–Whitney tests identified genre-defining features at both novel and chapter levels. The integration of these complementary approaches enhances microgenre detection beyond traditional classification capabilities. ReaderBench provides quantifiable linguistic evidence, while LLMs capture broader contextual patterns; together, they provide a multi-layered perspective on literary genre that reflects the complex and heterogeneous character of fictional texts. Our results argue that both language-specific and general-purpose computational tools can effectively detect stylistic diversity in Romanian fiction, opening new avenues for computational literary analysis in low-resource languages. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
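The non-parametric tests mentioned above are straightforward to reproduce with SciPy; the sketch below runs a Kruskal–Wallis test on one hypothetical ReaderBench-style feature grouped by genre, followed by a pairwise Mann–Whitney U test. Genre names and feature values are synthetic placeholders, not the study's data.

```python
# Sketch of the non-parametric testing described above: a Kruskal-Wallis test of one
# ReaderBench-style feature across genre groups, followed by a pairwise Mann-Whitney
# U test. Genre names and feature values are synthetic placeholders.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
# One feature value (e.g., mean sentence length) per chapter, grouped by genre.
feature_by_genre = {
    "adventure": rng.normal(18.0, 3.0, 40),
    "social": rng.normal(21.0, 3.0, 40),
    "sentimental": rng.normal(20.0, 3.0, 40),
}

h_stat, p_value = kruskal(*feature_by_genre.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Follow up with one pairwise comparison between two genres.
    u_stat, p_pair = mannwhitneyu(feature_by_genre["adventure"], feature_by_genre["social"])
    print(f"Mann-Whitney (adventure vs. social): U = {u_stat:.1f}, p = {p_pair:.4f}")
```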
26 pages, 389 KB  
Article
Integrating AI with Meta-Language: An Interdisciplinary Framework for Classifying Concepts in Mathematics and Computer Science
by Elena Kramer, Dan Lamberg, Mircea Georgescu and Miri Weiss Cohen
Information 2025, 16(9), 735; https://doi.org/10.3390/info16090735 - 26 Aug 2025
Viewed by 190
Abstract
Providing students with effective learning resources is essential for improving educational outcomes—especially in complex and conceptually diverse fields such as Mathematics and Computer Science. To better understand how these subjects are communicated, this study investigates the linguistic structures embedded in academic texts from selected subfields within both disciplines. In particular, we focus on meta-languages—the linguistic tools used to express definitions, axioms, intuitions, and heuristics within a discipline. The primary objective of this research is to identify which subfields of Mathematics and Computer Science share similar meta-languages. Identifying such correspondences may enable the rephrasing of content from less familiar subfields using styles that students already recognize from more familiar areas, thereby enhancing accessibility and comprehension. To pursue this aim, we compiled text corpora from multiple subfields across both disciplines. We compared their meta-languages using a combination of supervised (Neural Network) and unsupervised (clustering) learning methods. Specifically, we applied several clustering algorithms—K-means, Partitioning around Medoids (PAM), Density-Based Clustering, and Gaussian Mixture Models—to analyze inter-discipline similarities. To validate the resulting classifications, we used XLNet, a deep learning model known for its sensitivity to linguistic patterns. The model achieved an accuracy of 78% and an F1-score of 0.944. Our findings show that subfields can be meaningfully grouped based on meta-language similarity, offering valuable insights for tailoring educational content more effectively. To further verify these groupings and explore their pedagogical relevance, we conducted both quantitative and qualitative research involving student participation. This paper presents findings from the qualitative component—namely, a content analysis of semi-structured interviews with software engineering students and lecturers. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
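A rough sketch of grouping subfield corpora by meta-language similarity using two of the four algorithms the study lists (K-means and a Gaussian mixture) over TF-IDF vectors; the subfield snippets are invented placeholders, and PAM or density-based clustering would require additional libraries.

```python
# Rough sketch of grouping subfield corpora by meta-language similarity: TF-IDF
# vectors clustered with K-means and a Gaussian mixture (two of the four algorithms
# the study lists). The subfield snippets are invented placeholders.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.mixture import GaussianMixture

subfield_texts = {
    "graph_theory": "Definition. A graph G = (V, E) consists of ... Theorem. Every tree ...",
    "calculus": "Definition. Let f be a function ... Theorem. If f is continuous then ...",
    "databases": "A relation schema is defined as ... The optimizer applies heuristics ...",
    "algorithms": "We define the running time ... The following heuristic improves ...",
}

X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(subfield_texts.values())

kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
gmm_labels = GaussianMixture(n_components=2, covariance_type="diag",
                             random_state=0).fit_predict(X.toarray())

for name, k_label, g_label in zip(subfield_texts, kmeans_labels, gmm_labels):
    print(f"{name}: kmeans cluster = {k_label}, gmm cluster = {g_label}")
```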

24 pages, 1506 KB  
Article
LLM-Guided Weighted Contrastive Learning with Topic-Aware Masking for Efficient Domain Adaptation: A Case Study on Pulp-Era Science Fiction
by Sujin Kang
Electronics 2025, 14(17), 3351; https://doi.org/10.3390/electronics14173351 - 22 Aug 2025
Viewed by 315
Abstract
Domain adaptation of pre-trained language models remains challenging, especially for specialized text collections that include distinct vocabularies and unique semantic structures. Existing contrastive learning methods frequently rely on generic masking techniques and coarse-grained similarity measures, which limit their ability to capture fine-grained, domain-specific linguistic nuances. This paper proposes an enhanced domain adaptation framework by integrating weighted contrastive learning guided by large language model (LLM) feedback and a novel topic-aware masking strategy. Specifically, topic modeling is utilized to systematically identify semantically crucial domain-specific terms, enabling the creation of meaningful contrastive pairs through three targeted masking strategies: single-keyword, multiple-keyword, and partial-keyword masking. Each masked sentence undergoes LLM-guided reconstruction, accompanied by graduated similarity assessments that serve as continuous, fine-grained supervision signals. Experiments conducted on an early 20th-century science fiction corpus demonstrate that the proposed approach consistently outperforms existing baselines, such as SimCSE and DiffCSE, across multiple linguistic probing tasks within the newly introduced SF-ProbeEval benchmark. Furthermore, the proposed method achieves these performance improvements with significantly reduced computational requirements, highlighting its practical applicability for efficient and interpretable adaptation of language models to specialized domains. Full article
(This article belongs to the Section Artificial Intelligence)
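A hedged PyTorch sketch of a weighted, InfoNCE-style contrastive objective in which each positive pair is scaled by a graded similarity score of the kind LLM feedback could supply; this illustrates the idea, not the paper's exact loss or masking pipeline.

```python
# Sketch of a weighted, InfoNCE-style contrastive objective in PyTorch: each positive
# pair (original sentence vs. its masked-and-reconstructed variant) is scaled by a
# graded similarity weight in [0, 1], as LLM feedback could provide. Illustrative
# only; this is not the paper's exact loss or masking pipeline.
import torch
import torch.nn.functional as F


def weighted_contrastive_loss(anchors, positives, weights, temperature=0.05):
    """anchors, positives: (batch, dim) embeddings; weights: (batch,) in [0, 1]."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature         # cosine similarity of every anchor/positive pair
    targets = torch.arange(a.size(0))      # the i-th anchor matches the i-th positive
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()  # down-weight pairs judged dissimilar


if __name__ == "__main__":
    torch.manual_seed(0)
    anchors = torch.randn(8, 768)
    positives = anchors + 0.1 * torch.randn(8, 768)
    weights = torch.rand(8)                # stand-in for graded LLM similarity scores
    print(weighted_contrastive_loss(anchors, positives, weights).item())
```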

15 pages, 1506 KB  
Proceeding Paper
Artificial Intelligence for Historical Manuscripts Digitization: Leveraging the Lexicon of Cyril
by Stavros N. Moutsis, Despoina Ioakeimidou, Konstantinos A. Tsintotas, Konstantinos Evangelidis, Panagiotis E. Nastou and Antonis Tsolomitis
Eng. Proc. 2025, 107(1), 8; https://doi.org/10.3390/engproc2025107008 - 21 Aug 2025
Viewed by 294
Abstract
Artificial intelligence (AI) is a cutting-edge and revolutionary technology in computer science that has the potential to completely transform a wide range of disciplines, including the social sciences, the arts, and the humanities. Therefore, since its significance has been recognized in engineering and medicine, history, literature, paleography, and archaeology have recently embraced AI as new opportunities have arisen for preserving ancient manuscripts. Acknowledging the importance of digitizing archival documents, this paper explores the use of advanced technologies during this process, showing how these are employed at each stage and how the unique challenges inherent in past scripts are addressed. Our study is based on Cyril’s Lexicon, a Byzantine-era dictionary of great historical and linguistic significance in Greek territory. Full article

20 pages, 2833 KB  
Article
A Multi-Level Annotation Model for Fake News Detection: Implementing Kazakh-Russian Corpus via Label Studio
by Madina Sambetbayeva, Anargul Nekessova, Aigerim Yerimbetova, Abdygalym Bayangali, Mira Kaldarova, Duman Telman and Nurzhigit Smailov
Big Data Cogn. Comput. 2025, 9(8), 215; https://doi.org/10.3390/bdcc9080215 - 20 Aug 2025
Viewed by 447
Abstract
This paper presents a multi-level annotation model for detecting fake news in Kazakh and Russian languages, aiming to enhance understanding of disinformation strategies in multilingual digital media environments. Unlike traditional binary models, our approach captures the complexity of disinformation by accounting for both linguistic and cultural factors. To support this, a corpus of over 5000 news texts was manually annotated using the Label Studio platform. The annotation scheme consists of seven interrelated categories: CLAIM, SOURCE, EVIDENCE, DISINFORMATION_TECHNIQUE, AUTHOR_INTENT, TARGET_AUDIENCE, and TIMESTAMP. Inter-annotator agreement, evaluated using Cohen’s Kappa, ranged from 0.72 to 0.81, indicating substantial consistency. The annotated data reveals recurring patterns of disinformation, such as emotional manipulation, targeting of vulnerable individuals, and the strategic concealment of intent. Semantic relations between entities, such as CLAIM → EVIDENCE and CLAIM → AUTHOR_INTENT were formalized to represent disinformation narratives as knowledge graphs. This study contributes the first linguistically and culturally adapted annotation model for Kazakh and Russian languages, providing a robust and empirical resource for building interpretable and context-aware fake news detection systems. The resulting annotated corpus and its semantic structure offer valuable empirical material for further research in natural language processing, computational linguistics, and media studies in low-resource language environments. Full article
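The agreement check reported above is easy to replicate with scikit-learn's Cohen's kappa; the two label sequences below are invented stand-ins for two annotators labelling the same spans, not the corpus data.

```python
# Sketch of the inter-annotator agreement check: Cohen's kappa over two annotators'
# labels for the same spans, via scikit-learn. The label sequences are invented.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["CLAIM", "EVIDENCE", "CLAIM", "SOURCE", "CLAIM", "DISINFORMATION_TECHNIQUE"]
annotator_b = ["CLAIM", "EVIDENCE", "SOURCE", "SOURCE", "CLAIM", "DISINFORMATION_TECHNIQUE"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa = {kappa:.2f}")  # the paper reports 0.72-0.81, i.e., substantial agreement
```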

10 pages, 477 KB  
Article
Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies
by Yifan Zhang and Kuzma Strelnikov
Informatics 2025, 12(3), 83; https://doi.org/10.3390/informatics12030083 - 15 Aug 2025
Viewed by 434
Abstract
Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI–human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations—where LLMs achieved approximately 25% accuracy compared to just 1% for humans—suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system. Full article
(This article belongs to the Section Human-Computer Interaction)
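As an illustrative, much smaller-scale analogue of the phrase-completion probe, the sketch below asks an open causal language model (GPT-2, not the GPT-3.5/GPT-4 models used in the study) for its most probable continuation of a context-truncated phrase; the prompt and its expected completion are assumed examples, not items from the paper.

```python
# Illustrative, small-scale analogue of the phrase-completion probe, using an open
# causal LM (GPT-2) rather than the GPT-3.5/GPT-4 models from the study. The prompt
# and its expected completion ("key") are assumed examples, not items from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "She unlocked the door with her"      # high-context item; normed answer: "key"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(**inputs).logits[0, -1]  # distribution over the next token
top_id = int(torch.argmax(next_token_logits))
print("model completion:", tokenizer.decode([top_id]).strip())
```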

24 pages, 1064 KB  
Article
Arabic Abstractive Text Summarization Using an Ant Colony System
by Amal M. Al-Numai and Aqil M. Azmi
Mathematics 2025, 13(16), 2613; https://doi.org/10.3390/math13162613 - 15 Aug 2025
Viewed by 459
Abstract
Arabic abstractive summarization presents a complex multi-objective optimization challenge, balancing readability, informativeness, and conciseness. While extractive approaches dominate NLP, abstractive methods—particularly for Arabic—remain underexplored due to linguistic complexity. This study introduces, for the first time, ant colony system (ACS) for Arabic abstractive summarization (named AASAC—Arabic Abstractive Summarization using Ant Colony), framing it as a combinatorial evolutionary optimization task. Our method integrates collocation and word-relation features into heuristic-guided fitness functions, simultaneously optimizing content coverage and linguistic coherence. Evaluations on a benchmark dataset using LemmaRouge, a lemma-based metric that evaluates semantic similarity rather than surface word forms, demonstrate consistent superiority. For 30% summaries, AASAC achieves 51.61% (LemmaRouge-1) and 46.82% (LemmaRouge-L), outperforming baselines by 13.23% and 20.49%, respectively. At 50% summary length, it reaches 64.56% (LemmaRouge-1) and 61.26% (LemmaRouge-L), surpassing baselines by 10.73% and 3.23%. These results highlight AASAC’s effectiveness in addressing multi-objective NLP challenges and establish its potential for evolutionary computation applications in language generation, particularly for complex morphological languages like Arabic. Full article
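A toy illustration of the idea behind a lemma-level ROUGE-1 score: unigram overlap is counted over lemmas produced by an external lemmatizer rather than surface word forms. This is not the LemmaRouge implementation used in the paper, and the lemma lists are placeholders.

```python
# Toy illustration of the idea behind a lemma-level ROUGE-1 score: unigram overlap is
# counted over lemmas from an external lemmatizer instead of surface word forms. This
# is not the LemmaRouge implementation from the paper; the lemma lists are placeholders.
from collections import Counter


def lemma_rouge1(candidate_lemmas, reference_lemmas):
    """ROUGE-1 precision, recall, and F1 over lemmatized unigrams."""
    cand, ref = Counter(candidate_lemmas), Counter(reference_lemmas)
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1


# Pre-lemmatized tokens: inflected surface forms are assumed to map to the same lemma.
candidate = ["كتب", "طالب", "ملخص", "قصير"]
reference = ["كتب", "طالب", "ملخص", "واضح", "قصير"]
print(lemma_rouge1(candidate, reference))
```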

20 pages, 3244 KB  
Article
SOUTY: A Voice Identity-Preserving Mobile Application for Arabic-Speaking Amyotrophic Lateral Sclerosis Patients Using Eye-Tracking and Speech Synthesis
by Hessah A. Alsalamah, Leena Alhabrdi, May Alsebayel, Aljawhara Almisned, Deema Alhadlaq, Loody S. Albadrani, Seetah M. Alsalamah and Shada AlSalamah
Electronics 2025, 14(16), 3235; https://doi.org/10.3390/electronics14163235 - 14 Aug 2025
Viewed by 333
Abstract
Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disorder that progressively impairs motor and communication abilities. Globally, the prevalence of ALS was estimated at approximately 222,800 cases in 2015 and is projected to increase by nearly 70% to 376,700 cases by 2040, primarily driven by demographic shifts in aging populations, and the lifetime risk of developing ALS is 1 in 350–420. Despite international advancements in assistive technologies, a recent national survey in Saudi Arabia revealed that 100% of ALS care providers lack access to eye-tracking communication tools, and 92% reported communication aids as inconsistently available. While assistive technologies such as speech-generating devices and gaze-based control systems have made strides in recent decades, they primarily support English speakers, leaving Arabic-speaking ALS patients underserved. This paper presents SOUTY, a cost-effective, mobile-based application that empowers ALS patients to communicate using gaze-controlled interfaces combined with a text-to-speech (TTS) feature in the Arabic language, which is one of the five most widely spoken languages in the world. SOUTY (i.e., “my voice”) utilizes a personalized, pre-recorded voice bank of the ALS patient and integrated eye-tracking technology to support the formation and vocalization of custom phrases in Arabic. This study describes the full development life cycle of SOUTY from conceptualization and requirements gathering to system architecture, implementation, evaluation, and refinement. Validation included interviews with experts in Human–Computer Interaction (HCI) and speech pathology, as well as a public survey assessing awareness and technological readiness. The results support SOUTY as a culturally and linguistically relevant innovation that enhances autonomy and quality of life for Arabic-speaking ALS patients. This approach may serve as a replicable model for developing inclusive Augmentative and Alternative Communication (AAC) tools in other underrepresented languages. The system achieved 100% task completion during internal walkthroughs, with mean phrase selection times under 5 s and audio playback latency below 0.3 s. Full article

31 pages, 5187 KB  
Article
Investigation of ASR Models for Low-Resource Kazakh Child Speech: Corpus Development, Model Adaptation, and Evaluation
by Diana Rakhimova, Zhansaya Duisenbekkyzy and Eşref Adali
Appl. Sci. 2025, 15(16), 8989; https://doi.org/10.3390/app15168989 - 14 Aug 2025
Viewed by 351
Abstract
This study focuses on the development and evaluation of automatic speech recognition (ASR) systems for Kazakh child speech, an underexplored domain in both linguistic and computational research. A specialized acoustic corpus was constructed for children aged 2 to 8 years, incorporating age-related vocabulary stratification and gender variation to capture phonetic and prosodic diversity. The data were collected from three sources: a custom-designed Telegram bot, high-quality Dictaphone recordings, and naturalistic speech samples recorded in home and preschool environments. Four ASR models, Whisper, DeepSpeech, ESPnet, and Vosk, were evaluated. Whisper, ESPnet, and DeepSpeech were fine-tuned on the curated corpus, while Vosk was applied in its standard pretrained configuration. Performance was measured using five evaluation metrics: Word Error Rate (WER), BLEU, Translation Edit Rate (TER), Character Similarity Rate (CSRF2), and Accuracy. The results indicate that ESPnet achieved the highest accuracy (32%) and the lowest WER (0.242) for sentences, while Whisper performed well in semantically rich utterances (Accuracy = 33%; WER = 0.416). Vosk demonstrated the best performance on short words (Accuracy = 68%) and yielded the highest BLEU score (0.600) for short words. DeepSpeech showed moderate improvements in accuracy, particularly for short words (Accuracy = 60%), but faced challenges with longer utterances, achieving an Accuracy of 25% for sentences. These findings emphasize the critical importance of age-appropriate corpora and domain-specific adaptation when developing ASR systems for low-resource child speech, particularly in educational and therapeutic contexts. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
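A minimal sketch of the Word Error Rate metric this evaluation relies on, computed as word-level edit distance divided by reference length; whitespace tokenization is assumed, and the Kazakh example is a made-up toy case rather than corpus material.

```python
# Minimal sketch of the Word Error Rate metric used in the evaluation: word-level edit
# distance between reference and hypothesis divided by the reference length.
# Whitespace tokenization is assumed; the Kazakh example is a made-up toy case.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference and first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


# One word-level substitution out of three reference words gives a WER of about 0.33.
print(word_error_rate("мен мектепке барамын", "мен мектеп барамын"))
```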

25 pages, 1734 KB  
Article
A Multimodal Affective Interaction Architecture Integrating BERT-Based Semantic Understanding and VITS-Based Emotional Speech Synthesis
by Yanhong Yuan, Shuangsheng Duo, Xuming Tong and Yapeng Wang
Algorithms 2025, 18(8), 513; https://doi.org/10.3390/a18080513 - 14 Aug 2025
Viewed by 562
Abstract
Addressing the issues of coarse emotional representation, low cross-modal alignment efficiency, and insufficient real-time response capabilities in current human–computer emotional language interaction, this paper proposes an affective interaction framework integrating BERT-based semantic understanding with VITS-based speech synthesis. The framework aims to enhance the naturalness, expressiveness, and response efficiency of human–computer emotional interaction. By introducing a modular layered design, a six-dimensional emotional space, a gated attention mechanism, and a dynamic model scheduling strategy, the system overcomes challenges such as limited emotional representation, modality misalignment, and high-latency responses. Experimental results demonstrate that the framework achieves superior performance in speech synthesis quality (MOS: 4.35), emotion recognition accuracy (91.6%), and response latency (<1.2 s), outperforming baseline models like Tacotron2 and FastSpeech2. Through model lightweighting, GPU parallel inference, and load balancing optimization, the system validates its robustness and generalizability across English and Chinese corpora in cross-linguistic tests. The modular architecture and dynamic scheduling ensure scalability and efficiency, enabling a more humanized and immersive interaction experience in typical application scenarios such as psychological companionship, intelligent education, and high-concurrency customer service. This study provides an effective technical pathway for developing the next generation of personalized and immersive affective intelligent interaction systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
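A hedged PyTorch sketch of a gated fusion layer in the spirit of the framework's gated attention mechanism: a learned gate decides, per dimension, how much of the semantic vector versus the emotion vector to pass downstream. Dimensions and names are assumptions for illustration, not the paper's architecture.

```python
# Hedged sketch of a gated fusion layer in the spirit of the framework's gated
# attention mechanism: a learned gate blends the BERT semantic vector with the
# emotion vector per dimension. Dimensions and names are assumptions, not the
# paper's architecture.
import torch
import torch.nn as nn


class GatedFusion(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, semantic: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        gate = self.gate(torch.cat([semantic, emotion], dim=-1))  # values in (0, 1)
        return gate * semantic + (1 - gate) * emotion             # per-dimension blend


if __name__ == "__main__":
    fusion = GatedFusion()
    semantic = torch.randn(4, 768)  # e.g., BERT sentence embeddings
    emotion = torch.randn(4, 768)   # e.g., a six-dimensional emotion state projected to 768 dims
    print(fusion(semantic, emotion).shape)  # torch.Size([4, 768])
```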

25 pages, 1203 KB  
Review
Perception and Monitoring of Sign Language Acquisition for Avatar Technologies: A Rapid Focused Review (2020–2025)
by Khansa Chemnad and Achraf Othman
Multimodal Technol. Interact. 2025, 9(8), 82; https://doi.org/10.3390/mti9080082 - 14 Aug 2025
Viewed by 481
Abstract
Sign language avatar systems have emerged as a promising solution to bridge communication gaps where human sign language interpreters are unavailable. However, the design of these avatars often fails to account for the diversity in how users acquire and perceive sign language. This study presents a rapid review of 17 empirical studies (2020–2025) to synthesize how linguistic and cognitive variability affects sign language perception and how these findings can guide avatar development. We extracted and synthesized key constructs, participant profiles, and capture techniques relevant to avatar fidelity. This review finds that delayed exposure to sign language is consistently linked to persistent challenges in syntactic processing, classifier use, and avatar comprehension. In contrast, early-exposed signers demonstrate more robust parsing and greater tolerance of perceptual irregularities. Key perceptual features, such as smooth transitions between signs, expressive facial cues for grammatical clarity, and consistent spatial placement of referents, emerge as critical for intelligibility, particularly for late learners. These findings highlight the importance of participatory design and user-centered validation in advancing accessible, culturally responsive human–computer interaction through next-generation avatar systems. Full article

21 pages, 1112 KB  
Article
Evaluative Grammar and Non-Standard Comparatives: A Cross-Linguistic Analysis of Ukrainian and English
by Oksana Kovtun
Languages 2025, 10(8), 191; https://doi.org/10.3390/languages10080191 - 6 Aug 2025
Viewed by 489
Abstract
This study examines non-standard comparative and superlative adjective forms in Ukrainian and English, emphasizing their evaluative meanings and grammatical deviations. While prescriptive grammar dictates conventional comparison patterns, modern discourse—particularly in advertising, informal communication, and literary texts—exhibits an increasing prevalence of innovative comparative structures. Using a corpus-based approach, this research identifies patterns of positive and negative evaluative meanings, revealing that positive evaluations dominate non-standard comparatives in both languages, particularly in advertising (English: 78.5%, Ukrainian: 80.2%). However, English exhibits a higher tolerance for grammatical flexibility, while Ukrainian maintains a more restricted use, primarily in commercial and expressive discourse. The findings highlight the pragmatic and evaluative functions of such constructions, including hyperbolic emphasis, rhetorical contrast, and branding strategies. These insights contribute to research on comparative grammar, sentiment analysis, and natural language processing, particularly in modeling evaluative structures in computational linguistics. Full article

23 pages, 1650 KB  
Article
Generative AI-Enhanced Virtual Reality Simulation for Pre-Service Teacher Education: A Mixed-Methods Analysis of Usability and Instructional Utility for Course Integration
by Sumin Hong, Jewoong Moon, Taeyeon Eom, Idowu David Awoyemi and Juno Hwang
Educ. Sci. 2025, 15(8), 997; https://doi.org/10.3390/educsci15080997 - 5 Aug 2025
Viewed by 820
Abstract
Teacher education faces persistent challenges, including limited access to authentic field experiences and a disconnect between theoretical instruction and classroom practice. While virtual reality (VR) simulations offer an alternative, most are constrained by inflexible design and lack scalability, failing to mirror the complexity of real teaching environments. This study introduces TeacherGen@i, a generative AI (GenAI)-enhanced VR simulation designed to provide pre-service teachers with immersive, adaptive teaching practice through realistic GenAI agents. Using an explanatory case study with a mixed-methods approach, the study examines the simulation’s usability, design challenges, and instructional utility within a university-based teacher preparation course. Data sources included usability surveys and reflective journals, analyzed through thematic coding and computational linguistic analysis using LIWC. Findings suggest that TeacherGen@i facilitates meaningful development of teaching competencies such as instructional decision-making, classroom communication, and student engagement, while also identifying notable design limitations related to cognitive load, user interface design, and instructional scaffolding. This exploratory research offers preliminary insights into the integration of generative AI in teacher simulations and its potential to support responsive and scalable simulation-based learning environments. Full article
