Search Results (247)

Search Parameters:
Keywords = neural machine translation

23 pages, 1626 KiB  
Review
Artificial Intelligence for Predicting Insolvency in the Construction Industry—A Systematic Review and Empirical Feature Derivation
by Janappriya Jayawardana, Pabasara Wijeratne, Zora Vrcelj and Malindu Sandanayake
Buildings 2025, 15(17), 2988; https://doi.org/10.3390/buildings15172988 - 22 Aug 2025
Abstract
The construction sector is particularly prone to financial instability, with insolvencies occurring more frequently among micro- and small-scale firms. The current study explores the application of artificial intelligence (AI) and machine learning (ML) models for predicting insolvency within this sector. The research combined a structured literature review with empirical analysis of construction sector-level insolvency data spanning the recent decade. A critical review of studies highlighted a clear shift from traditional statistical methods to AI/ML-driven approaches, with ensemble learning, neural networks, and hybrid learning models demonstrating superior predictive accuracy and robustness. While current predictive models mostly rely on financial ratio-based inputs, this research complements this foundation by introducing additional sector-specific variables. Empirical analysis reveals persistent patterns of distress, with micro- and small-sized construction businesses accounting for approximately 92% to 96% of insolvency cases each year in the Australian construction sector. Key risk signals such as firm size, cash flow risks, governance breaches and capital adequacy issues were translated into practical features that may enhance the predictive sensitivity of the existing models. The study also emphasises the need for digital self-assessment tools to support micro- and small-scale contractors in evaluating their financial health. By transforming predictive insights into accessible, real-time evaluations, such tools can facilitate early interventions and reduce the risk of insolvency among vulnerable construction firms. The current study combines insights from the review of AI/ML insolvency prediction models with sector-specific feature derivation, potentially providing a foundation for future research and practical adaptation in the construction context. Full article
28 pages, 1970 KiB  
Review
Artificial Intelligence in Alzheimer’s Disease Diagnosis and Prognosis Using PET-MRI: A Narrative Review of High-Impact Literature Post-Tauvid Approval
by Rafail C. Christodoulou, Amanda Woodward, Rafael Pitsillos, Reina Ibrahim and Michalis F. Georgiou
J. Clin. Med. 2025, 14(16), 5913; https://doi.org/10.3390/jcm14165913 - 21 Aug 2025
Abstract
Background: Artificial intelligence (AI) is reshaping neuroimaging workflows for Alzheimer’s disease (AD) diagnosis, particularly through PET and MRI analysis advances. Since the FDA approval of Tauvid, a PET tracer targeting tau pathology, there has been a notable increase in studies applying AI to neuroimaging data. This narrative review synthesizes recent, high-impact literature to highlight clinically relevant AI applications in AD imaging. Methods: This review examined peer-reviewed studies published between January 2020 and January 2025, focusing on the use of AI, including machine learning, deep learning, and hybrid models for diagnostic and prognostic tasks in AD using PET and/or MRI. Studies were identified through targeted PubMed, Scopus, and Embase searches, emphasizing methodological diversity and clinical relevance. Results: A total of 111 studies were categorized into five thematic areas: Image preprocessing and segmentation, diagnostic classification, prognosis and disease staging, multimodal data fusion, and emerging innovations. Deep learning models such as convolutional neural networks (CNNs), generative adversarial networks (GANs), and transformer-based architectures were widely employed by the research community in the field of AD. At the same time, several models reported strong diagnostic performance, but methodological challenges such as reproducibility, small sample sizes, and lack of external validation limit clinical translation. Trends in explainable AI, synthetic imaging, and integration of clinical biomarkers are also discussed. Conclusions: AI is rapidly advancing the field of AD imaging, offering tools for enhanced segmentation, staging, and early diagnosis. Multimodal approaches and biomarker-guided models show particular promise. However, future research must focus on reproducibility, interpretability, and standardized validation to bridge the gap between research and clinical practice. Full article

42 pages, 2529 KiB  
Review
Artificial Intelligence in Sports Biomechanics: A Scoping Review on Wearable Technology, Motion Analysis, and Injury Prevention
by Marouen Souaifi, Wissem Dhahbi, Nidhal Jebabli, Halil İbrahim Ceylan, Manar Boujabli, Raul Ioan Muntean and Ismail Dergaa
Bioengineering 2025, 12(8), 887; https://doi.org/10.3390/bioengineering12080887 - 20 Aug 2025
Abstract
Aim: This scoping review examines the application of artificial intelligence (AI) in sports biomechanics, with a focus on enhancing performance and preventing injuries. The review addresses key research questions, including primary AI methods, their effectiveness in improving athletic performance, their potential for injury prediction, sport-specific applications, strategies for translating knowledge, ethical considerations, and remaining research gaps. Following the PRISMA-ScR guidelines, a comprehensive literature search was conducted across five databases (PubMed/MEDLINE, Web of Science, IEEE Xplore, Scopus, and SPORTDiscus), encompassing studies published between January 2015 and December 2024. After screening 3248 articles, 73 studies met the inclusion criteria (Cohen’s kappa = 0.84). Data were collected on AI techniques, biomechanical parameters, performance metrics, and implementation details. Results revealed a shift from traditional statistical models to advanced machine learning methods. Based on moderate-quality evidence from 12 studies, convolutional neural networks reached 94% agreement with international experts in technique assessment. Computer vision demonstrated accuracy within 15 mm compared to marker-based systems (6 studies, moderate quality). AI-driven training plans showed 25% accuracy improvements (4 studies, limited evidence). Random forest models predicted hamstring injuries with 85% accuracy (3 studies, moderate quality). Learning management systems enhanced knowledge transfer, raising coaches’ understanding by 45% and athlete adherence by 3.4 times. Implementing integrated AI systems resulted in a 23% reduction in reinjury rates. However, significant challenges remain, including standardizing data, improving model interpretability, validating models in real-world settings, and integrating them into coaching routines. In summary, incorporating AI into sports biomechanics marks a groundbreaking advancement, providing analytical capabilities that surpass traditional techniques. Future research should focus on creating explainable AI, applying rigorous validation methods, handling data ethically, and ensuring equitable access to promote the widespread and responsible use of AI across all levels of competitive sports. Full article
(This article belongs to the Section Biomechanics and Sports Medicine)

16 pages, 489 KiB  
Article
Integrating Hybrid AI Approaches for Enhanced Translation in Minority Languages
by Chen-Chi Chang, Yu-Hsun Lin, Yun-Hsiang Hsu and I-Hsin Fan
Appl. Sci. 2025, 15(16), 9039; https://doi.org/10.3390/app15169039 - 15 Aug 2025
Abstract
This study presents a hybrid artificial intelligence model designed to enhance translation quality for low-resource languages, specifically targeting the Hakka language. The proposed model integrates phrase-based machine translation (PBMT) and neural machine translation (NMT) within a recursive learning framework. The methodology consists of three key stages: (1) initial translation using PBMT, where Hakka corpus data is structured into a parallel dataset; (2) NMT training with Transformers, leveraging the generated parallel corpus to train deep learning models; and (3) recursive translation refinement, where iterative translations further enhance model accuracy by expanding the training dataset. This study employs preprocessing techniques to clean and optimize the dataset, reducing noise and improving sentence segmentation. A BLEU score evaluation is conducted to compare the effectiveness of PBMT and NMT across various corpus sizes, demonstrating that while PBMT performs well with limited data, the Transformer-based NMT achieves superior results as training data increases. The findings highlight the advantages of a hybrid approach in overcoming data scarcity challenges for minority languages. This research contributes to machine translation methodologies by proposing a scalable framework for improving linguistic accessibility in under-resourced languages. Full article
(This article belongs to the Special Issue The Advanced Trends in Natural Language Processing)
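
As an illustration of the BLEU comparison described in this abstract, the sketch below computes corpus-level BLEU (up to 4-grams, with a brevity penalty) for two hypothetical system outputs. The paper's exact tokenization, n-gram order, and smoothing settings are not specified here, so treat this as a minimal approximation rather than the authors' evaluation code.

```python
# Minimal corpus-level BLEU sketch for comparing PBMT and NMT outputs.
# Tokenization, smoothing, and the example sentences are illustrative assumptions.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """hypotheses / references: lists of token lists (one reference per sentence)."""
    clipped = [0] * max_n
    totals = [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
            clipped[n - 1] += sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
            totals[n - 1] += sum(hyp_ngrams.values())
    # Average log-precision with a count floor of 1 to avoid log(0) (simple smoothing).
    log_prec = sum(
        math.log((c if c > 0 else 1) / t) for c, t in zip(clipped, totals) if t > 0
    ) / max_n
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)

# Hypothetical tokenized reference and two system outputs (not from the paper's corpus).
refs = [["ngai", "oi", "hok", "hakka", "fa"]]
pbmt_out = [["ngai", "oi", "hok", "hakka"]]
nmt_out = [["ngai", "oi", "hok", "hakka", "fa"]]
print(corpus_bleu(pbmt_out, refs), corpus_bleu(nmt_out, refs))
```

Scoring both systems against the same references, as above, is one way to reproduce the PBMT-versus-NMT comparison across corpus sizes that the abstract reports.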

19 pages, 821 KiB  
Article
Multimodal Multisource Neural Machine Translation: Building Resources for Image Caption Translation from European Languages into Arabic
by Roweida Mohammed, Inad Aljarrah, Mahmoud Al-Ayyoub and Ali Fadel
Computation 2025, 13(8), 194; https://doi.org/10.3390/computation13080194 - 8 Aug 2025
Abstract
Neural machine translation (NMT) models combining textual and visual inputs generate more accurate translations compared with unimodal models. Moreover, translation models with an under-resourced target language benefit from multisource inputs (source sentences are provided in different languages). Building MultiModal MultiSource NMT (M3S-NMT) systems requires significant effort to curate datasets suitable for such a multifaceted task. This work uses image caption translation as an example of multimodal translation and presents a novel public dataset for translating captions from multiple European languages (viz., English, German, French, and Czech) into the distant and under-resourced Arabic language. Moreover, it presents multitask learning models trained and tested on this dataset to serve as solid baselines to help further research in this area. These models involve two parts: one for learning the visual representations of the input images, and the other for translating the textual input based on these representations. The translations are produced by a framework of attention-based encoder–decoder architectures. The visual features are learned from a pretrained convolutional neural network (CNN). These features are then integrated with textual features learned through standard recurrent neural networks (RNNs) with GloVe or BERT word embeddings. Despite the challenges associated with the task at hand, the results of these systems are very promising, reaching 34.57 and 42.52 METEOR scores. Full article
(This article belongs to the Section Computational Social Science)
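
The fusion described in this abstract, where CNN visual features condition an attention-based recurrent decoder over the source caption, can be sketched roughly as follows. The dimensions, the GRU choice, and the point at which image features enter the decoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multimodal caption-translation model: a pretrained CNN supplies
# an image feature vector, a recurrent encoder embeds the source caption, and an
# attention-based GRU decoder conditions on both. Sizes are illustrative.
import torch
import torch.nn as nn

class MultimodalNMT(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=256, hid=512, img_dim=2048):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        self.img_proj = nn.Linear(img_dim, hid)   # project CNN features into decoder space
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.decoder = nn.GRUCell(emb + 2 * hid, hid)
        self.attn = nn.Linear(hid + 2 * hid, 1)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src_ids, img_feats, tgt_ids):
        enc_states, _ = self.encoder(self.src_emb(src_ids))   # (B, S, 2*hid)
        h = torch.tanh(self.img_proj(img_feats))               # init decoder state from image
        logits = []
        for t in range(tgt_ids.size(1)):
            # Additive attention over encoder states, conditioned on the decoder state.
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, enc_states.size(1), -1), enc_states], dim=-1))
            weights = torch.softmax(scores, dim=1)              # (B, S, 1)
            context = (weights * enc_states).sum(dim=1)         # (B, 2*hid)
            h = self.decoder(torch.cat([self.tgt_emb(tgt_ids[:, t]), context], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                       # (B, T, tgt_vocab)

# Toy forward pass with random data (image features would come from, e.g., a ResNet).
model = MultimodalNMT(src_vocab=1000, tgt_vocab=1200)
out = model(torch.randint(0, 1000, (2, 7)), torch.randn(2, 2048), torch.randint(0, 1200, (2, 9)))
print(out.shape)  # torch.Size([2, 9, 1200])
```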

23 pages, 1115 KiB  
Article
Research on Mongolian–Chinese Neural Machine Translation Based on Implicit Linguistic Features and Deliberation Networks
by Qingdaoerji Ren, Shike Li, Xuerong Wei, Yatu Ji and Nier Wu
Electronics 2025, 14(15), 3144; https://doi.org/10.3390/electronics14153144 - 7 Aug 2025
Abstract
Sequence-to-sequence neural machine translation (NMT) has achieved great success with many language pairs. However, its performance remains constrained in low-resource settings such as Mongolian–Chinese translation due to its strong reliance on large-scale parallel corpora. To address this issue, we propose ILFDN-Transformer, a Mongolian–Chinese NMT model that integrates implicit language features and a deliberation network to improve translation quality under limited-resource conditions. Specifically, we leverage the BART pre-trained language model to capture deep semantic representations of source sentences and apply knowledge distillation to integrate the resulting implicit linguistic features into the Transformer encoder to provide enhanced semantic support. During decoding, we introduce a deliberation mechanism that guides the generation process by referencing linguistic knowledge encoded in a multilingual pre-trained model, thereby improving the fluency and coherence of target translations. Furthermore, considering the flexible word order characteristics of the Mongolian language, we propose a Mixed Positional Encoding (MPE) method that combines absolute positional encoding with LSTM-based dynamic encoding, enabling the model to better adapt to complex syntactic variations. Experimental results show that ILFDN-Transformer achieves a BLEU score improvement of 3.53 compared to the baseline Transformer model, fully demonstrating the effectiveness of our proposed method. Full article
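
One plausible reading of the Mixed Positional Encoding described above is sketched below: fixed sinusoidal positions are combined with an LSTM pass over the token embeddings that supplies order-sensitive, content-dependent position information. How the paper actually combines the two signals (sum, gate, or concatenation) is not stated in this abstract, so a simple sum is assumed.

```python
# Sketch of a Mixed Positional Encoding: sinusoidal absolute positions plus an
# LSTM-derived dynamic encoding, added to the token embeddings. Combination by
# summation is an assumption; dimensions are illustrative.
import math
import torch
import torch.nn as nn

class MixedPositionalEncoding(nn.Module):
    def __init__(self, d_model=512, max_len=512):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)                    # (max_len, d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)

    def forward(self, emb):                               # emb: (B, T, d_model)
        dynamic, _ = self.lstm(emb)                       # content-dependent positions
        absolute = self.pe[: emb.size(1)].unsqueeze(0)    # fixed sinusoidal positions
        return emb + absolute + dynamic

# The result would feed the Transformer encoder in place of plain positional encoding.
x = torch.randn(2, 10, 512)
print(MixedPositionalEncoding()(x).shape)  # torch.Size([2, 10, 512])
```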

19 pages, 753 KiB  
Article
In-Context Learning for Low-Resource Machine Translation: A Study on Tarifit with Large Language Models
by Oussama Akallouch and Khalid Fardousse
Algorithms 2025, 18(8), 489; https://doi.org/10.3390/a18080489 - 6 Aug 2025
Abstract
This study presents the first systematic evaluation of in-context learning for Tarifit machine translation, a low-resource Amazigh language spoken by 5 million people in Morocco and Europe. We assess three large language models (GPT-4, Claude-3.5, PaLM-2) across Tarifit–Arabic, Tarifit–French, and Tarifit–English translation using 1000 sentence pairs and 5-fold cross-validation. Results show that 8-shot similarity-based demonstration selection achieves optimal performance. GPT-4 achieved 20.2 BLEU for Tarifit–Arabic, 14.8 for Tarifit–French, and 10.9 for Tarifit–English. Linguistic proximity significantly impacts translation quality, with Tarifit–Arabic substantially outperforming other language pairs by 8.4 BLEU points due to shared vocabulary and morphological patterns. Error analysis reveals systematic issues with morphological complexity (42% of errors) and cultural terminology preservation (18% of errors). This work establishes baseline benchmarks for Tarifit translation and demonstrates the viability of in-context learning for morphologically complex low-resource languages, contributing to linguistic equity in AI systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
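
A minimal sketch of the similarity-based demonstration selection described above follows: retrieve the k most similar training pairs for a test sentence and assemble them into a few-shot prompt. The paper's retrieval model, prompt template, and example sentences are not given in this abstract, so TF-IDF character n-grams, a generic template, and toy pairs are assumed.

```python
# Sketch of similarity-based few-shot demonstration selection for prompting an LLM.
# Retrieval method (TF-IDF char n-grams) and prompt format are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_prompt(test_src, pool, k=8):
    """pool: list of (tarifit_sentence, arabic_translation) pairs."""
    sources = [src for src, _ in pool]
    vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit(sources + [test_src])
    sims = cosine_similarity(vec.transform([test_src]), vec.transform(sources))[0]
    top = sims.argsort()[::-1][:k]                       # k most similar training pairs
    demos = "\n".join(f"Tarifit: {pool[i][0]}\nArabic: {pool[i][1]}" for i in top)
    return f"{demos}\nTarifit: {test_src}\nArabic:"

# Toy illustrative pairs (not from the paper's dataset); the prompt would then be
# sent to GPT-4, Claude-3.5, or PaLM-2.
pool = [("azul fell-awen", "السلام عليكم"), ("mamec tellid?", "كيف حالك؟")]
print(build_prompt("mamec telli?", pool, k=2))
```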

18 pages, 640 KiB  
Article
Fine-Tuning Methods and Dataset Structures for Multilingual Neural Machine Translation: A Kazakh–English–Russian Case Study in the IT Domain
by Zhanibek Kozhirbayev and Zhandos Yessenbayev
Electronics 2025, 14(15), 3126; https://doi.org/10.3390/electronics14153126 - 6 Aug 2025
Abstract
This study explores fine-tuning methods and dataset structures for multilingual neural machine translation using the No Language Left Behind model, with a case study on Kazakh, English, and Russian. We compare single-stage and two-stage fine-tuning approaches, as well as triplet versus non-triplet dataset configurations, to improve translation quality. A high-quality, 50,000-triplet dataset in the information technology domain, manually translated and expert-validated, serves as the in-domain benchmark, complemented by out-of-domain corpora like KazParC. Evaluations using BLEU, chrF, METEOR, and TER metrics reveal that single-stage fine-tuning excels for low-resource pairs (e.g., 0.48 BLEU, 0.77 chrF for Kazakh → Russian), while two-stage fine-tuning benefits high-resource pairs (Russian → English). Triplet datasets improve cross-linguistic consistency compared with non-triplet structures. Our reproducible framework offers practical guidance for adapting neural machine translation to technical domains and low-resource languages. Full article
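
The triplet configuration described above can be illustrated by expanding one Kazakh–English–Russian record into all six translation directions, versus a non-triplet corpus that covers only some directions. The NLLB-style language codes are standard, but the paper's exact preprocessing is an assumption here.

```python
# Sketch: expand a Kazakh-English-Russian triplet record into directional training pairs.
from itertools import permutations

LANGS = {"kk": "kaz_Cyrl", "en": "eng_Latn", "ru": "rus_Cyrl"}   # NLLB language codes

def expand_triplet(record):
    """record: dict with 'kk', 'en', 'ru' keys -> list of (src_lang, src, tgt_lang, tgt)."""
    return [
        (LANGS[a], record[a], LANGS[b], record[b])
        for a, b in permutations(LANGS, 2)            # all 6 translation directions
    ]

# Toy IT-domain record (illustrative translations, not from the paper's dataset).
triplet = {"kk": "дерекқор", "en": "database", "ru": "база данных"}
for src_lang, src, tgt_lang, tgt in expand_triplet(triplet):
    print(f"{src_lang} -> {tgt_lang}: {src} => {tgt}")
```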

24 pages, 624 KiB  
Review
Integrating Artificial Intelligence into Perinatal Care Pathways: A Scoping Review of Reviews of Applications, Outcomes, and Equity
by Rabie Adel El Arab, Omayma Abdulaziz Al Moosa, Zahraa Albahrani, Israa Alkhalil, Joel Somerville and Fuad Abuadas
Nurs. Rep. 2025, 15(8), 281; https://doi.org/10.3390/nursrep15080281 - 31 Jul 2025
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) have been reshaping maternal, fetal, neonatal, and reproductive healthcare by enhancing risk prediction, diagnostic accuracy, and operational efficiency across the perinatal continuum. However, no comprehensive synthesis has yet been published. Objective: To conduct a scoping review of reviews of AI/ML applications spanning reproductive, prenatal, postpartum, neonatal, and early child-development care. Methods: We searched PubMed, Embase, the Cochrane Library, Web of Science, and Scopus through April 2025. Two reviewers independently screened records, extracted data, and assessed methodological quality using AMSTAR 2 for systematic reviews, ROBIS for bias assessment, SANRA for narrative reviews, and JBI guidance for scoping reviews. Results: Thirty-nine reviews met our inclusion criteria. In preconception and fertility treatment, convolutional neural network-based platforms can identify viable embryos and key sperm parameters with over 90 percent accuracy, and machine-learning models can personalize follicle-stimulating hormone regimens to boost mature oocyte yield while reducing overall medication use. Digital sexual-health chatbots have enhanced patient education, pre-exposure prophylaxis adherence, and safer sexual behaviors, although data-privacy safeguards and bias mitigation remain priorities. During pregnancy, advanced deep-learning models can segment fetal anatomy on ultrasound images with more than 90 percent overlap compared to expert annotations and can detect anomalies with sensitivity exceeding 93 percent. Predictive biometric tools can estimate gestational age to within about one week and fetal weight to within approximately 190 g. In the postpartum period, AI-driven decision-support systems and conversational agents can facilitate early screening for depression and can guide follow-up care. Wearable sensors enable remote monitoring of maternal blood pressure and heart rate to support timely clinical intervention. Within neonatal care, the Heart Rate Observation (HeRO) system has reduced mortality among very low-birth-weight infants by roughly 20 percent, and additional AI models can predict neonatal sepsis, retinopathy of prematurity, and necrotizing enterocolitis with area-under-the-curve values above 0.80. From an operational standpoint, automated ultrasound workflows deliver biometric measurements at about 14 milliseconds per frame, and dynamic scheduling in IVF laboratories lowers staff workload and per-cycle costs. Home-monitoring platforms for pregnant women are associated with 7–11 percent reductions in maternal mortality and preeclampsia incidence. Despite these advances, most evidence derives from retrospective, single-center studies with limited external validation. Low-resource settings, especially in Sub-Saharan Africa, remain under-represented, and few AI solutions are fully embedded in electronic health records. Conclusions: AI holds transformative promise for perinatal care but will require prospective multicenter validation, equity-centered design, robust governance, transparent fairness audits, and seamless electronic health record integration to translate these innovations into routine practice and improve maternal and neonatal outcomes. Full article

31 pages, 2007 KiB  
Review
Artificial Intelligence-Driven Strategies for Targeted Delivery and Enhanced Stability of RNA-Based Lipid Nanoparticle Cancer Vaccines
by Ripesh Bhujel, Viktoria Enkmann, Hannes Burgstaller and Ravi Maharjan
Pharmaceutics 2025, 17(8), 992; https://doi.org/10.3390/pharmaceutics17080992 - 30 Jul 2025
Cited by 1
Abstract
The convergence of artificial intelligence (AI) and nanomedicine has transformed cancer vaccine development, particularly in optimizing RNA-loaded lipid nanoparticles (LNPs). Stability and targeted delivery are major obstacles to the clinical translation of promising RNA-LNP vaccines for cancer immunotherapy. This systematic review analyzes AI's impact on LNP engineering through machine learning-driven predictive models, generative adversarial networks (GANs) for novel lipid design, and neural network-enhanced biodistribution prediction. AI reduces the therapeutic development timeline through accelerated virtual screening of millions of lipid combinations, compared to conventional high-throughput screening. Furthermore, AI-optimized LNPs demonstrate improved tumor targeting. GAN-generated lipids show structural novelty while maintaining higher encapsulation efficiency; graph neural networks predict RNA-LNP binding affinity with high accuracy vs. experimental data; digital twins reduce lyophilization optimization from years to months; and federated learning models enable multi-institutional data sharing. We propose a framework to address key technical challenges: training data quality (min. 15,000 lipid structures), model interpretability (SHAP > 0.65), and regulatory compliance (21 CFR Part 11). AI integration reduces manufacturing costs and makes personalized cancer vaccines affordable. Future work should prioritize quantum machine learning for stability prediction and edge computing for real-time formulation modifications. Full article

21 pages, 599 KiB  
Review
Radiomics Beyond Radiology: Literature Review on Prediction of Future Liver Remnant Volume and Function Before Hepatic Surgery
by Fabrizio Urraro, Giulia Pacella, Nicoletta Giordano, Salvatore Spiezia, Giovanni Balestrucci, Corrado Caiazzo, Claudio Russo, Salvatore Cappabianca and Gianluca Costa
J. Clin. Med. 2025, 14(15), 5326; https://doi.org/10.3390/jcm14155326 - 28 Jul 2025
Abstract
Background: Post-hepatectomy liver failure (PHLF) is the most worrisome complication after a major hepatectomy and is the leading cause of postoperative mortality. The most important predictor of PHLF is the future liver remnant (FLR), the volume of the liver that will remain after the hepatectomy, representing a major concern for hepatobiliary surgeons, radiologists, and patients. Therefore, an accurate preoperative assessment of the FLR and the prediction of PHLF are crucial to minimize risks and enhance patient outcomes. Recent radiomics and deep learning models show potential in predicting PHLF and the FLR by integrating imaging and clinical data. However, most studies lack external validation and methodological homogeneity and rely on small, single-center cohorts. This review outlines current CT-based approaches for surgical risk stratification and key limitations hindering clinical translation. Methods: A literature analysis was performed on the PubMed database. We reviewed original articles using the following keywords: [(Artificial intelligence OR radiomics OR machine learning OR deep learning OR neural network OR texture analysis) AND liver resection AND CT]. Results: Of 153 pertinent papers found, we highlighted papers on the prediction of PHLF and of the FLR. Models were built using automated machine learning (ML) and deep learning (DL) algorithms. Conclusions: Radiomics models seem reliable and applicable to clinical practice in the preoperative prediction of PHLF and the FLR in patients undergoing major liver surgery. Further studies with larger validation cohorts are required. Full article
(This article belongs to the Special Issue Advances in Gastroenterological Surgery)

81 pages, 4295 KiB  
Systematic Review
Leveraging AI-Driven Neuroimaging Biomarkers for Early Detection and Social Function Prediction in Autism Spectrum Disorders: A Systematic Review
by Evgenia Gkintoni, Maria Panagioti, Stephanos P. Vassilopoulos, Georgios Nikolaou, Basilis Boutsinas and Apostolos Vantarakis
Healthcare 2025, 13(15), 1776; https://doi.org/10.3390/healthcare13151776 - 22 Jul 2025
Abstract
Background: This systematic review examines artificial intelligence (AI) applications in neuroimaging for autism spectrum disorder (ASD), addressing six research questions regarding biomarker optimization, modality integration, social function prediction, developmental trajectories, clinical translation challenges, and multimodal data enhancement for earlier detection and improved outcomes. Methods: Following PRISMA guidelines, we conducted a comprehensive literature search across 8 databases, yielding 146 studies from an initial 1872 records. These studies were systematically analyzed to address key questions regarding AI neuroimaging approaches in ASD detection and prognosis. Results: Neuroimaging combined with AI algorithms demonstrated significant potential for early ASD detection, with electroencephalography (EEG) showing promise. Machine learning classifiers achieved high diagnostic accuracy (85–99%) using features derived from neural oscillatory patterns, connectivity measures, and signal complexity metrics. Studies of infant populations have identified the 9–12-month developmental window as critical for biomarker detection and the onset of behavioral symptoms. Multimodal approaches that integrate various imaging techniques have substantially enhanced predictive capabilities, while longitudinal analyses have shown potential for tracking developmental trajectories and treatment responses. Conclusions: AI-driven neuroimaging biomarkers represent a promising frontier in ASD research, potentially enabling the detection of symptoms before they manifest behaviorally and providing objective measures of intervention efficacy. While technical and methodological challenges remain, advancements in standardization, diverse sampling, and clinical validation could facilitate the translation of findings into practice, ultimately supporting earlier intervention during critical developmental periods and improving outcomes for individuals with ASD. Future research should prioritize large-scale validation studies and standardized protocols to realize the full potential of precision medicine in ASD. Full article

17 pages, 1467 KiB  
Article
Confidence-Based Knowledge Distillation to Reduce Training Costs and Carbon Footprint for Low-Resource Neural Machine Translation
by Maria Zafar, Patrick J. Wall, Souhail Bakkali and Rejwanul Haque
Appl. Sci. 2025, 15(14), 8091; https://doi.org/10.3390/app15148091 - 21 Jul 2025
Abstract
The transformer-based deep learning approach represents the current state-of-the-art in machine translation (MT) research. Large-scale pretrained transformer models produce state-of-the-art performance across a wide range of MT tasks for many languages. However, such deep neural network (NN) models are often data-, compute-, space-, power-, and energy-hungry, typically requiring powerful GPUs or large-scale clusters to train and deploy. As a result, they are often regarded as “non-green” and “unsustainable” technologies. Distilling knowledge from large deep NN models (teachers) to smaller NN models (students) is a widely adopted sustainable development approach in MT as well as in broader areas of natural language processing (NLP), including speech and image processing. However, distilling large pretrained models presents several challenges. First, training time and cost increase with the volume of data used to train a student model. This could pose a challenge for translation service providers (TSPs), as they may have limited budgets for training. Moreover, CO2 emissions generated during model training are typically proportional to the amount of data used, contributing to environmental harm. Second, when querying teacher models, including encoder–decoder models such as NLLB, the translations they produce for low-resource languages may be noisy or of low quality. This can undermine sequence-level knowledge distillation (SKD), as student models may inherit and reinforce errors from inaccurate labels. In this study, the teacher model’s confidence estimation is employed to filter those instances from the distilled training data for which the teacher exhibits low confidence. We tested our methods on a low-resource Urdu-to-English translation task operating within a constrained training budget in an industrial translation setting. Our findings show that confidence estimation-based filtering can significantly reduce the cost and CO2 emissions associated with training a student model without a drop in translation quality, making it a practical and environmentally sustainable solution for TSPs. Full article
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
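
The confidence-based filtering idea can be sketched as scoring each teacher-generated (source, translation) pair by the teacher's mean token log-probability and dropping low-confidence pairs before training the student. The actual confidence estimator and threshold used in the paper are not reproduced here; both are assumptions in the sketch below.

```python
# Sketch of confidence-based filtering of distilled training data. The confidence
# score (mean token log-probability) and the threshold are assumptions.
import math

def mean_logprob(token_logprobs):
    """Sentence-level confidence = average log-probability of generated tokens."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def filter_distilled(pairs, threshold=math.log(0.3)):
    """pairs: list of dicts with 'src', 'teacher_hyp', 'token_logprobs'."""
    return [p for p in pairs if mean_logprob(p["token_logprobs"]) >= threshold]

# Hypothetical distilled examples; token log-probs would come from the teacher's decoder.
pairs = [
    {"src": "...", "teacher_hyp": "a clean translation", "token_logprobs": [-0.2, -0.4, -0.1]},
    {"src": "...", "teacher_hyp": "a noisy translation", "token_logprobs": [-2.9, -3.5, -2.2]},
]
print(len(filter_distilled(pairs)))  # 1: the low-confidence pair is dropped
```

Filtering before training also shrinks the student's training set, which is how the approach reduces training cost and the associated CO2 emissions.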

27 pages, 6102 KiB  
Article
The Impact of Wind Speed on Electricity Prices in the Polish Day-Ahead Market Since 2016, and Its Applicability to Machine-Learning-Powered Price Prediction
by Rafał Sowiński and Aleksandra Komorowska
Energies 2025, 18(14), 3749; https://doi.org/10.3390/en18143749 - 15 Jul 2025
Abstract
The rising share of wind generation in power systems, driven by the need to decarbonise the energy sector, is changing the relationship between wind speed and electricity prices. In the case of Poland, this relationship has not been thoroughly investigated, particularly in the aftermath of the restrictive legal changes introduced in 2016, which halted numerous onshore wind investments. Studying this relationship remains necessary to understand the broader market effects of wind speed on electricity prices, especially considering evolving policies and growing interest in renewable energy integration. In this context, this paper analyses wind speed, wind generation, and other relevant datasets in relation to electricity prices using multiple statistical methods, including correlation analysis, regression modelling, and artificial neural networks. The results show that wind speed is a significant factor in setting electricity prices (with a correlation coefficient reaching up to −0.7). The findings indicate that not only is it important to include wind speed as an electricity price indicator, but it is also worth investing in wind generation, since higher wind output can be translated into lower electricity prices. This study contributes to a better understanding of how natural variability in renewable resources translates into electricity market outcomes under policy-constrained conditions. Its innovative aspect lies in combining statistical and machine learning techniques to quantify the influence of wind speed on electricity prices, using updated data from a period of regulatory stagnation. Full article
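
The correlation step described above can be illustrated with a small pandas/NumPy sketch on synthetic hourly data. The column names, units, and toy relationship are assumptions, and the paper's regression and artificial-neural-network models are not shown.

```python
# Sketch: Pearson correlation between hourly wind speed and day-ahead prices, plus a
# simple linear fit. Data here are synthetic; the study uses market and weather records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
wind_speed = rng.uniform(0, 15, 1000)                       # m/s, toy values
price = 400 - 18 * wind_speed + rng.normal(0, 40, 1000)     # PLN/MWh, toy relationship
df = pd.DataFrame({"wind_speed": wind_speed, "price": price})

print(df.corr(method="pearson").loc["wind_speed", "price"])  # strongly negative correlation
slope, intercept = np.polyfit(df["wind_speed"], df["price"], 1)
print(f"price ~ {intercept:.1f} + ({slope:.1f}) * wind_speed")
```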

24 pages, 939 KiB  
Review
Advances in Amazigh Language Technologies: A Comprehensive Survey Across Processing Domains
by Oussama Akallouch, Mohammed Akallouch and Khalid Fardousse
Information 2025, 16(7), 600; https://doi.org/10.3390/info16070600 - 13 Jul 2025
Abstract
The Amazigh language, spoken by millions across North Africa, presents unique computational challenges due to its complex morphological system, dialectal variation, and multiple writing systems. This survey examines technological advances over the past decade across four key domains: natural language processing, speech recognition, optical character recognition, and machine translation. We analyze the evolution from rule-based systems to advanced neural models, demonstrating how researchers have addressed resource constraints through innovative approaches that blend linguistic knowledge with machine learning. Our analysis reveals uneven progress across domains, with optical character recognition reaching high maturity levels while machine translation remains constrained by limited parallel data. Beyond technical metrics, we explore applications in education, cultural preservation, and digital accessibility, showing how these technologies enable Amazigh speakers to participate in the digital age. This work illustrates that advancing language technology for marginalized languages requires fundamentally different approaches that respect linguistic diversity while ensuring digital equity. Full article
