Search Results (3,898)

Search Parameters:
Keywords = automated assessment

24 pages, 666 KiB  
Article
DeepSeek-V3, GPT-4, Phi-4, and LLaMA-3.3 Generate Correct Code for LoRaWAN-Related Engineering Tasks
by Daniel Fernandes, João P. Matos-Carvalho, Carlos M. Fernandes and Nuno Fachada
Electronics 2025, 14(7), 1428; https://doi.org/10.3390/electronics14071428 - 1 Apr 2025
Abstract
This paper investigates the performance of 16 Large Language Models (LLMs) in automating LoRaWAN-related engineering tasks involving optimal placement of drones and received power calculation under progressively complex zero-shot, natural language prompts. The primary research question is whether lightweight, locally executed LLMs can generate correct Python code for these tasks. To assess this, we compared locally run models against state-of-the-art alternatives, such as GPT-4 and DeepSeek-V3, which served as reference points. By extracting and executing the Python functions generated by each model, we evaluated their outputs on a zero-to-five scale. Results show that while DeepSeek-V3 and GPT-4 consistently provided accurate solutions, certain smaller models—particularly Phi-4 and LLaMA-3.3—also demonstrated strong performance, underscoring the viability of lightweight alternatives. Other models exhibited errors stemming from incomplete understanding or syntactic issues. These findings illustrate the potential of LLM-based approaches for specialized engineering applications while highlighting the need for careful model selection, rigorous prompt design, and targeted domain fine-tuning to achieve reliable outcomes. Full article
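One of the benchmark tasks in the entry above is a received-power calculation. As a rough illustration of that kind of task — not the paper's actual prompt, reference solution, or parameter choices — a log-distance path loss sketch in Python might look like:

```python
import math

def received_power_dbm(p_tx_dbm, distance_m, freq_hz, path_loss_exp=2.0, d0_m=1.0):
    """Received power under the log-distance path loss model.

    Free-space path loss at the reference distance d0_m, then a
    10 * n * log10(d / d0) falloff with path loss exponent n.
    """
    c = 3.0e8  # speed of light, m/s
    fspl_d0 = 20 * math.log10(4 * math.pi * d0_m * freq_hz / c)
    return p_tx_dbm - fspl_d0 - 10 * path_loss_exp * math.log10(distance_m / d0_m)

# Example: 14 dBm transmitter on the EU868 band, 1 km link.
rx = received_power_dbm(14.0, 1000.0, 868e6)
```

The exponent, reference distance, and EU868 frequency are illustrative assumptions; the paper scores model-generated functions of this kind on its zero-to-five scale.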
25 pages, 741 KiB  
Article
A Transformer-Based Model for the Automatic Detection of Administrative Burdens in Transposed Legislative Documents
by Victor Costa, Mauro Castelli and Pedro Coelho
Technologies 2025, 13(4), 134; https://doi.org/10.3390/technologies13040134 - 1 Apr 2025
Abstract
Legislative impact assessment (LIA) can be defined as the process performed by governments and legislative bodies to evaluate the potential effects of proposed policies or directives before they are implemented. This assessment typically covers various aspects (including economic, social, and environmental impacts) and is designed to ensure that policy proposals are well-founded and transparent, and that potential impacts are thoroughly examined before decisions are made. This process is currently performed by human experts and requires a significant amount of time. It is also characterized by a degree of subjectivity that makes it difficult for citizens and companies to perceive the process as transparent. Moreover, public administration services responsible for LIA report significant difficulties in performing a timely and effective impact assessment exercise due to a lack of human and financial resources. To address this need, this paper presents an artificial intelligence-based system for automating part of the impact assessment process, with the specific objective of detecting administrative burdens in transposed EU legislation. The system is built on a fine-tuned, transformer-based architecture leveraging transfer learning, making it an innovative tool for automating legislative impact assessment. Comprehensive testing on transposed European legislation demonstrated that the system significantly enhances efficiency and accuracy in what has traditionally been a complex and time-consuming task. Full article
17 pages, 2723 KiB  
Article
Automated Detection, Localization, and Severity Assessment of Proximal Dental Caries from Bitewing Radiographs Using Deep Learning
by Mashail Alsolamy, Farrukh Nadeem, Amr Ahmed Azhari and Walaa Magdy Ahmed
Diagnostics 2025, 15(7), 899; https://doi.org/10.3390/diagnostics15070899 - 1 Apr 2025
Abstract
Background/Objectives: Dental caries is a widespread chronic infection, affecting a large segment of the population. Proximal caries, in particular, present a distinct obstacle for early identification owing to their position, which hinders clinical inspection. Radiographic assessments, particularly bitewing images (BRs), are frequently utilized to detect these carious lesions. Nonetheless, misinterpretations may obstruct precise diagnosis. This paper presents a deep-learning-based system to improve the evaluation process by detecting proximal dental caries from BRs and classifying their severity in accordance with ICCMSTM guidelines. Methods: The system comprises three fundamental tasks: caries detection, tooth numbering, and describing caries location by identifying the tooth it belongs to and the surface, each built independently to enable reuse across many applications. We analyzed 1354 BRs annotated by a consultant of restorative dentistry to delineate the pertinent categories, concentrating on the detection and localization of caries tasks. A pre-trained YOLOv11-based instance segmentation model was employed, allocating 80% of the dataset for training, 10% for validation, and the remaining portion for evaluating the model on unseen data. Results: The system attained a precision of 0.844, recall of 0.864, F1-score of 0.851, and mAP of 0.888 for segmenting caries and classifying their severity, using an intersection over union (IoU) of 50% and a confidence threshold of 0.25. Concentrating on teeth that are entirely or three-quarters presented in BRs, the system attained 100% for identifying the affected teeth and surfaces. It achieved high sensitivity and accuracy in comparison to dentist evaluations. Conclusions: The results are encouraging, suggesting that the proposed system may effectively assist dentists in evaluating bitewing images, assessing lesion severity, and recommending suitable treatments. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
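For readers unfamiliar with the detection metrics quoted in the entry above: the F1-score is the harmonic mean of precision and recall, and a detection counts as correct when its box overlap (IoU) with the ground truth exceeds the threshold. A minimal sketch — the 0.844/0.864/0.851 figures are the paper's aggregates, and a direct harmonic mean gives roughly 0.854, with the small gap presumably due to per-class averaging:

```python
def f1_from_pr(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Reported aggregates from the abstract.
f1 = f1_from_pr(0.844, 0.864)  # ~0.854
```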
21 pages, 33600 KiB  
Article
Pix2Pix-Based Modelling of Urban Morphogenesis and Its Linkage to Local Climate Zones and Urban Heat Islands in Chinese Megacities
by Mo Wang, Ziheng Xiong, Jiayu Zhao, Shiqi Zhou and Qingchan Wang
Land 2025, 14(4), 755; https://doi.org/10.3390/land14040755 - 1 Apr 2025
Abstract
Accelerated urbanization in China poses significant challenges for developing urban planning strategies that are responsive to diverse climatic conditions. This demands a sophisticated understanding of the complex interactions between 3D urban forms and local climate dynamics. This study employed the Conditional Generative Adversarial Network (cGAN) of the Pix2Pix algorithm as a predictive model to simulate 3D urban morphologies aligned with Local Climate Zone (LCZ) classifications. The research framework comprises four key components: (1) acquisition of LCZ maps and urban form samples from selected Chinese megacities for training, utilizing datasets such as the World Cover database, RiverMap’s building outlines, and integrated satellite data from Landsat 8, Sentinel-1, and Sentinel-2; (2) evaluation of the Pix2Pix algorithm’s performance in simulating urban environments; (3) generation of 3D urban models to demonstrate the model’s capability for automated urban morphology construction, with specific potential for examining urban heat island effects; (4) examination of the model’s adaptability in urban planning contexts in projecting urban morphological transformations. By integrating urban morphological inputs from eight representative Chinese metropolises, the model’s efficacy was assessed both qualitatively and quantitatively, achieving an RMSE of 0.187, an R2 of 0.78, and a PSNR of 14.592. In a generalized test of urban morphology prediction through LCZ classification, exemplified by the case of Zhuhai, results indicated the model’s effectiveness in categorizing LCZ types. In conclusion, the integration of urban morphological data from eight representative Chinese metropolises further confirmed the model’s potential in climate-adaptive urban planning. The findings of this study underscore the potential of generative algorithms based on LCZ types in accurately forecasting urban morphological development, thereby making significant contributions to sustainable and climate-responsive urban planning. Full article
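The image metrics reported in the entry above are mutually consistent: for unit-range images, PSNR = 10·log10(MAX²/MSE), so an RMSE of 0.187 implies a PSNR of roughly 14.56 dB, close to the reported 14.592 (the residual gap is plausibly per-image averaging). A minimal sketch:

```python
import math

def psnr_db(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB from mean squared error."""
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

# RMSE of 0.187 on unit-range images implies roughly 14.56 dB.
approx = psnr_db(0.187 ** 2)
```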
43 pages, 3617 KiB  
Review
AI and Interventional Radiology: A Narrative Review of Reviews on Opportunities, Challenges, and Future Directions
by Andrea Lastrucci, Nicola Iosca, Yannick Wandael, Angelo Barra, Graziano Lepri, Nevio Forini, Renzo Ricci, Vittorio Miele and Daniele Giansanti
Diagnostics 2025, 15(7), 893; https://doi.org/10.3390/diagnostics15070893 - 1 Apr 2025
Abstract
The integration of artificial intelligence in interventional radiology is an emerging field with transformative potential, aiming to make a great contribution to the health domain. This overview of reviews seeks to identify prevailing themes, opportunities, challenges, and recommendations related to the process of integration. Utilizing a standardized checklist and quality control procedures, this review examines recent advancements in, and future implications of, this domain. In total, 27 review studies were selected through the systematic process. Based on the overview, the integration of artificial intelligence (AI) in interventional radiology (IR) presents significant opportunities to enhance precision, efficiency, and personalization of procedures. AI automates tasks like catheter manipulation and needle placement, improving accuracy and reducing variability. It also integrates multiple imaging modalities, optimizing treatment planning and outcomes. AI aids intra-procedural guidance with advanced needle tracking and real-time image fusion. Robotics and automation in IR are advancing, though full autonomy in AI-guided systems has not been achieved. Despite these advancements, the integration of AI in IR is complex, involving imaging systems, robotics, and other technologies. This complexity requires a comprehensive certification and integration process. The role of regulatory bodies, scientific societies, and clinicians is essential to address these challenges. Standardized guidelines, clinician education, and careful AI assessment are necessary for safe integration. The future of AI in IR depends on developing standardized guidelines for medical devices and AI applications. Collaboration between certifying bodies, scientific societies, and legislative entities, as seen in the EU AI Act, will be crucial to tackling AI-specific challenges. Focusing on transparency, data governance, human oversight, and post-market monitoring will ensure AI integration in IR proceeds with safeguards, benefiting patient outcomes and advancing the field. Full article
(This article belongs to the Special Issue Artificial Intelligence in Clinical Medical Imaging: 2nd Edition)
19 pages, 13596 KiB  
Article
SMS3D: 3D Synthetic Mushroom Scenes Dataset for 3D Object Detection and Pose Estimation
by Abdollah Zakeri, Bikram Koirala, Jiming Kang, Venkatesh Balan, Weihang Zhu, Driss Benhaddou and Fatima A. Merchant
Computers 2025, 14(4), 128; https://doi.org/10.3390/computers14040128 - 1 Apr 2025
Abstract
The mushroom farming industry struggles to automate harvesting due to limited large-scale annotated datasets and the complex growth patterns of mushrooms, which complicate detection, segmentation, and pose estimation. To address this, we introduce a synthetic dataset with 40,000 unique scenes of white Agaricus bisporus and brown baby bella mushrooms, capturing realistic variations in quantity, position, orientation, and growth stages. Our two-stage pose estimation pipeline combines 2D object detection and instance segmentation with a 3D point cloud-based pose estimation network using a Point Transformer. By employing a continuous 6D rotation representation and a geodesic loss, our method ensures precise rotation predictions. Experiments show that processing point clouds with 1024 points and the 6D Gram–Schmidt rotation representation yields optimal results, achieving an average rotational error of 1.67° on synthetic data, surpassing current state-of-the-art methods in mushroom pose estimation. The model, further, generalizes well to real-world data, attaining a mean angle difference of 3.68° on a subset of the M18K dataset with ground-truth annotations. This approach aims to drive automation in harvesting, growth monitoring, and quality assessment in the mushroom industry. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision—2nd Edition)
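The continuous 6D rotation representation mentioned in the entry above (due to Zhou et al.) maps two 3-vectors to a rotation matrix via Gram–Schmidt, and a geodesic loss measures the rotation angle between prediction and ground truth. A pure-Python sketch of those two pieces — not the paper's implementation:

```python
import math

def _norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rotation_from_6d(r6):
    """Gram-Schmidt map from a 6D vector to a rotation matrix (rows b1, b2, b3)."""
    a1, a2 = r6[:3], r6[3:]
    b1 = _norm(a1)
    p = _dot(b1, a2)
    b2 = _norm([x - p * y for x, y in zip(a2, b1)])
    b3 = _cross(b1, b2)  # guarantees det(R) = +1
    return [b1, b2, b3]

def geodesic_angle(r_a, r_b):
    """Rotation angle of R_a^T R_b; its trace is the elementwise dot product."""
    tr = sum(r_a[i][j] * r_b[i][j] for i in range(3) for j in range(3))
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))
```

Unlike quaternions or Euler angles, this 6D map has no discontinuities, which is what makes it attractive as a regression target.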
23 pages, 36274 KiB  
Article
An Improved Machine Learning-Based Method for Unsupervised Characterisation for Coral Reef Monitoring in Earth Observation Time-Series Data
by Zayad AlZayer, Philippa Mason, Robert Platt and Cédric M. John
Remote Sens. 2025, 17(7), 1244; https://doi.org/10.3390/rs17071244 - 1 Apr 2025
Abstract
This study presents an innovative approach to automated coral reef monitoring using satellite imagery, addressing challenges in image quality assessment and correction. The method employs Principal Component Analysis (PCA) coupled with clustering for efficient image selection and quality evaluation, followed by a machine learning-based cloud removal technique using an XGBoost model trained to detect land and cloudy pixels over water. The workflow incorporates depth correction using Lyzenga’s algorithm and superpixel analysis, culminating in an unsupervised classification of reef areas using KMeans. Results demonstrate the effectiveness of this approach in producing consistent, interpretable classifications of reef ecosystems across different imaging conditions. This study highlights the potential for scalable, autonomous monitoring of coral reefs, offering valuable insights for conservation efforts and climate change impact assessment in shallow marine environments. Full article
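Lyzenga's depth correction, used in the workflow above, log-linearizes water-leaving radiance and cancels depth for a band pair using the ratio of diffuse attenuation coefficients. A minimal sketch; the deep-water radiances and the k-ratio are treated here as given inputs, whereas the full method estimates them from the imagery:

```python
import math

def depth_invariant_index(rad_i, rad_j, k_ratio, deep_i=0.0, deep_j=0.0):
    """Lyzenga depth-invariant bottom index for one band pair.

    Subtract deep-water radiance, log-linearize, then cancel depth using
    the ratio of diffuse attenuation coefficients k_i / k_j.
    """
    x_i = math.log(rad_i - deep_i)
    x_j = math.log(rad_j - deep_j)
    return x_i - k_ratio * x_j
```

The same bottom type observed at different depths then maps to (nearly) the same index value, which is what makes the subsequent unsupervised clustering meaningful.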
14 pages, 549 KiB  
Article
Detecting Credit-Seeking Behavior with Programmed Instruction Framesets in a Formal Languages Course
by Yusuf Elnady, Mohammed Farghally, Mostafa Mohammed and Clifford A. Shaffer
Educ. Sci. 2025, 15(4), 439; https://doi.org/10.3390/educsci15040439 - 31 Mar 2025
Abstract
When students use an online eTextbook with content and interactive graded exercises, they often display aspects of two types of behavior: credit-seeking and knowledge-seeking. A student might behave to some degree in either or both ways with given content. In this work, we attempt to detect the degree to which either behavior takes place and investigate relationships with student performance. Our testbed is an eTextbook for teaching Formal Languages, an advanced Computer Science course. This eTextbook uses Programmed Instruction framesets (slideshows with frequent questions interspersed to keep students engaged) to deliver a significant portion of the material. We analyze session interactions to detect credit-seeking incidents in two ways. We start with an unsupervised machine learning model that clusters behavior in work sessions based on sequences of user interactions. Then, we perform a fine-grained analysis where we consider the type of each question presented within the frameset (these can be multi-choice, single-choice, or T/F questions). Our study involves 219 students, 224 framesets, and 15,521 work sessions across three semesters. We find that credit-seeking behavior is correlated with lower learning outcomes for students. We also find that the type of question is a key factor in whether students use credit-seeking behavior. The implications of our research suggest that educational software should be designed to minimize opportunities for credit-seeking behavior and promote genuine engagement with the material. Full article
(This article belongs to the Special Issue Perspectives on Computer Science Education)
22 pages, 10030 KiB  
Article
The Integration of a Multidomain Monitoring Platform with Structural Data: A Building Case Study
by Elena Candigliota, Orazio Colaneri, Laura Gioiella, Valeria Leggieri, Giuseppe Marghella, Anna Marzo, Saverio Mazzarelli, Michele Morici, Simone Murazzo, Rifat Seferi, Angelo Tatì, Concetta Tripepi and Vincenza A. M. Luprano
Sustainability 2025, 17(7), 3076; https://doi.org/10.3390/su17073076 - 31 Mar 2025
Abstract
In recent years, innovative Non-Destructive Testing (NDT) techniques, applicable for the assessment of existing civil structures, have become available for in situ analysis on Reinforced Concrete (RC) and masonry structures, but they are still not established for regular inspections, especially after seismic events. The damage assessment of RC buildings after seismic events is a very relevant issue in Italy, where most of the structures built in the last 50 years are RC structures. Furthermore, there is also a growing interest in being able to monitor structural health aspects by storing them on the building’s digital twin. For these reasons, it is necessary to develop an affordable and ready-to-use NDT procedure that provides more accurate indications on the real state of damage of reinforced concrete buildings after seismic events and to integrate these data into an interoperable digital twin for automated, optimized building performance monitoring, management, and preventive maintenance. To this end, a case study was conducted on a building in the Marche region in Italy, damaged by the 2016 earthquake. Non-destructive tests were performed and inserted into the LIS platform for the creation of a digital twin of the building. This platform seamlessly manages, visualizes, and analyzes the collected data and integrates various sensor nodes deployed throughout the building. The paper also presents a methodology to simplify the work of the test operator and make the entire process of knowledge of the building faster and more sustainable through a QR-code interface. Full article
26 pages, 430 KiB  
Article
Practical Comparison Between the CI/CD Platforms Azure DevOps and GitHub
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(4), 153; https://doi.org/10.3390/fi17040153 - 31 Mar 2025
Abstract
Continuous integration and delivery are essential for modern software development, enabling teams to automate testing, streamline deployments, and deliver high-quality software more efficiently. As DevOps adoption grows, selecting the right CI/CD platform is essential for optimizing workflows. Azure DevOps and GitHub, both under Microsoft, are leading solutions with distinct features and target audiences. This paper compares Azure DevOps and GitHub, evaluating their CI/CD capabilities, scalability, security, pricing, and usability. It explores their integration with cloud environments, automation workflows, and suitability for teams of varying sizes. Security features, including access controls, vulnerability scanning, and compliance, are analyzed to assess their suitability for organizational needs. Cost-effectiveness is also examined through licensing models and total ownership costs. This study leverages real-world case studies and industry trends to guide organizations in selecting the right CI/CD tools. Whether seeking a fully managed DevOps suite or a flexible, Git-native platform, understanding the strengths and limitations of Azure DevOps and GitHub is crucial for optimizing development and meeting long-term scalability goals. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
17 pages, 840 KiB  
Article
Enhancing Green Practice Detection in Social Media with Paraphrasing-Based Data Augmentation
by Anna Glazkova and Olga Zakharova
Big Data Cogn. Comput. 2025, 9(4), 81; https://doi.org/10.3390/bdcc9040081 - 31 Mar 2025
Abstract
Detecting mentions of green waste practices on social networks is a crucial tool for environmental monitoring and sustainability analytics. Social media serve as a valuable source of ecological information, enabling researchers to track trends, assess public engagement, and predict the spread of sustainable behaviors. Automatic extraction of mentions of green waste practices facilitates large-scale analysis, but the uneven distribution of such mentions presents a challenge for effective detection. To address this, data augmentation plays a key role in balancing class distribution in green practice detection tasks. In this study, we compared existing data augmentation techniques based on the paraphrasing of original texts. We evaluated the effectiveness of additional explanations in prompts, the Chain-of-Thought prompting, synonym substitution, and text expansion. Experiments were conducted on the GreenRu dataset, which focuses on detecting mentions of green waste practices in Russian social media. Our results, obtained using two instruction-based large language models, demonstrated the effectiveness of the Chain-of-Thought prompting for text augmentation. These findings contribute to advancing sustainability analytics by improving automated detection and analysis of environmental discussions. Furthermore, the results of this study can be applied to other tasks that require augmentation of text data in the context of ecological research and beyond. Full article
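Of the augmentation techniques compared in the entry above, synonym substitution is the simplest to sketch. The synonym table below is a toy example invented for illustration (real pipelines draw candidates from a thesaurus or embedding neighbours), and the replacement probability is an assumed parameter:

```python
import random

# Toy synonym table for illustration only.
SYNONYMS = {
    "recycle": ["reuse", "repurpose"],
    "waste": ["trash", "refuse"],
}

def synonym_substitute(text, p=0.5, seed=42):
    """Replace each word that has synonyms with probability p (seeded RNG)."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        options = SYNONYMS.get(word.lower())
        out.append(rng.choice(options) if options and rng.random() < p else word)
    return " ".join(out)
```

Each call with the same seed yields the same augmented text, which keeps experiments reproducible; the LLM-based paraphrasing and Chain-of-Thought prompting the paper favours replace this lookup with a generation step.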
13 pages, 2091 KiB  
Article
Grid-Based Software for Quantification of Diabetic Retinal Nonperfusion on Ultra-Widefield Fluorescein Angiography
by Amro Omari, Caitlyn Cooper, Eric B. Desjarlais, Maverick Cook, Maria Fernanda Abalem, Chris A. Andrews, Katherine Joltikov, Rida M. Khan, Andy Chen, Andrew DeOrio, Thomas W. Gardner, Yannis M. Paulus and K. Thiran Jayasundera
Diagnostics 2025, 15(7), 875; https://doi.org/10.3390/diagnostics15070875 - 31 Mar 2025
Abstract
Background/Objectives: Fluorescein angiography (FA) is essential for diagnosing and managing diabetic retinopathy (DR) and other retinal vascular diseases and has recently demonstrated potential as a quantitative tool for disease staging. The advent of ultra-widefield (UWF) FA, allowing visualization of the peripheral retina, enhances this potential. Retinal hypoperfusion is a critical risk factor for proliferative DR, yet quantifying it reliably remains a challenge. Methods: This study evaluates the efficacy of the Michigan grid method, a software-based grading system, in detecting retinal hypoperfusion compared to the traditional freehand method. Retinal UWF fluorescein angiograms were obtained from 50 patients, including 10 with healthy retinae and 40 with non-proliferative DR. Two independent, masked graders quantified hypoperfusion in each image using two methods: freehand annotation and a new Michigan grid method. Results: Using the Michigan grid method, Grader 1 identified more ungradable segments, while Grader 2 identified more perfused and nonperfused segments. Cohen’s weighted kappa indicated substantial agreement, which was slightly higher for the entire retina (0.711) compared to the central retinal area (0.686). The Michigan grid method shows comparable or slightly improved inter-rater reliability compared to the freehand method. Conclusions: This study demonstrates a new Michigan grid method for the evaluation of FA for hypoperfusion while highlighting ongoing challenges in achieving consistent and objective retinal nonperfusion assessment, underscoring the need for further refinement and the potential integration of automated approaches. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
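Cohen's weighted kappa, used in the entry above to compare graders, discounts chance agreement and weights disagreements by their ordinal distance. The abstract does not state whether linear or quadratic weights were used, so this pure-Python sketch supports both:

```python
def weighted_kappa(y1, y2, categories, weights="linear"):
    """Cohen's weighted kappa for two raters over ordinal categories."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # Disagreement weights grow with ordinal distance between categories.
    if weights == "linear":
        w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    else:  # quadratic
        w = [[((i - j) / (k - 1)) ** 2 for j in range(k)] for i in range(k)]
    n = len(y1)
    obs = [[0.0] * k for _ in range(k)]  # observed joint proportions
    for a, b in zip(y1, y2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1 = [sum(row) for row in obs]                             # rater 1 marginals
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals
    observed = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

Values near 0.61–0.80, like the 0.711 and 0.686 reported, are conventionally read as "substantial" agreement on the Landis–Koch scale.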
20 pages, 1075 KiB  
Review
Eye Tracking in Parkinson’s Disease: A Review of Oculomotor Markers and Clinical Applications
by Pierluigi Diotaiuti, Giulio Marotta, Francesco Di Siena, Salvatore Vitiello, Francesco Di Prinzio, Angelo Rodio, Tommaso Di Libero, Lavinia Falese and Stefania Mancone
Brain Sci. 2025, 15(4), 362; https://doi.org/10.3390/brainsci15040362 - 31 Mar 2025
Abstract
(1) Background. Eye movement abnormalities are increasingly recognized as early biomarkers of Parkinson’s disease (PD), reflecting both motor and cognitive dysfunction. Advances in eye-tracking technology provide objective, quantifiable measures of saccadic impairments, fixation instability, smooth pursuit deficits, and pupillary changes. These advances offer new opportunities for early diagnosis, disease monitoring, and neurorehabilitation. (2) Objective. This narrative review explores the relationship between oculomotor dysfunction and PD pathophysiology, highlighting the potential applications of eye tracking in clinical and research settings. (3) Methods. A comprehensive literature review was conducted, focusing on peer-reviewed studies examining eye movement dysfunction in PD. Relevant publications were identified through PubMed, Scopus, and Web of Science, using key terms, such as “eye movements in Parkinson’s disease”, “saccadic control and neurodegeneration”, “fixation instability in PD”, and “eye-tracking for cognitive assessment”. Studies integrating machine learning models and VR-based interventions were also included. (4) Results. Patients with PD exhibit distinct saccadic abnormalities, including hypometric saccades, prolonged saccadic latency, and increased anti-saccade errors. These impairments correlate with executive dysfunction and disease progression. Fixation instability and altered pupillary responses further support the role of oculomotor metrics as non-invasive biomarkers. Emerging AI-driven eye-tracking models show promise for automated PD diagnosis and progression tracking. (5) Conclusions. Eye tracking provides a reliable, cost-effective tool for early PD detection, cognitive assessment, and rehabilitation. Future research should focus on standardizing clinical protocols, validating predictive AI models, and integrating eye tracking into multimodal treatment strategies. Full article
(This article belongs to the Section Neurodegenerative Diseases)
14 pages, 272 KiB  
Article
Serum and Cerebrospinal Fluid Malondialdehyde Levels in Patients with Mild Cognitive Impairment
by Stavroula Ioannidou, Argyrios Ginoudis, Kali Makedou, Magda Tsolaki and Evgenia Lymperaki
J. Xenobiot. 2025, 15(2), 50; https://doi.org/10.3390/jox15020050 - 30 Mar 2025
Abstract
Mild cognitive impairment (MCI) is recognized as an intermediate stage between normal aging and dementia. Oxidative stress plays a crucial role in the pathophysiology of neurodegenerative diseases. This study aimed to investigate differences in malondialdehyde (MDA) levels in the serum and cerebrospinal fluid (CSF) of patients with MCI, in relation to FDA-approved biomarkers and based on age, sex, and education level. Participants aged 55–90 years were categorized into three groups based on FDA-approved biomarkers, specifically the CSF Aβ42/Aβ40 ratio, and clinical screening assessments: 30 MCI (A+) patients with abnormal CSF Aβ42/Aβ40 ratios (Group A), 30 MCI (A−) patients with normal CSF Aβ42/Aβ40 ratios (Group B), and 30 healthy (A−) participants with normal CSF Aβ42/Aβ40 ratios (Group C). The CSF FDA-approved biomarkers were measured using an automated immunochemical method (Fujirebio, Inc.), while MDA was determined using a competitive inhibition enzyme immunoassay (ELK Biotechnology Co., Ltd.). Our results showed that mean CSF MDA values were significantly lower in Group C than in Group A (83 ng/mL vs. 130 ng/mL, p = 0.024) and Group B (83 ng/mL vs. 142 ng/mL, p = 0.011). Differences in serum and CSF MDA levels were also observed across the study groups according to sex, age, and education level. These findings suggest that lipid peroxidation, as indicated by CSF MDA, could serve as a potential biomarker for the early recognition of MCI. Full article
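Group comparisons like those reported above (e.g., mean CSF MDA 83 vs. 130 ng/mL) rest on a two-sample test. The abstract does not state which test produced its p-values, so the following is only a hedged sketch of Welch's t statistic for independent samples with unequal variances, using the standard library:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2 / na, stdev(b) ** 2 / nb  # per-group variance of the mean
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df
```

Converting t and df to a p-value requires the t-distribution CDF (e.g., `scipy.stats.t.sf`), which is omitted here to keep the sketch dependency-free.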
27 pages, 4371 KiB  
Systematic Review
Diagnostic Accuracy of Deep Learning for Intracranial Hemorrhage Detection in Non-Contrast Brain CT Scans: A Systematic Review and Meta-Analysis
by Armin Karamian and Ali Seifi
J. Clin. Med. 2025, 14(7), 2377; https://doi.org/10.3390/jcm14072377 - 30 Mar 2025
Abstract
Background: Intracranial hemorrhage (ICH) is a life-threatening medical condition that requires early detection and treatment. In this systematic review and meta-analysis, we aimed to update our knowledge of the performance of deep learning (DL) models in detecting ICH on non-contrast computed tomography (NCCT). Methods: The study protocol was registered with PROSPERO (CRD420250654071). The PubMed/MEDLINE and Google Scholar databases and the reference sections of included studies were searched for eligible studies. The risk of bias in the included studies was assessed using the QUADAS-2 tool. The required data were collected to calculate pooled sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with the corresponding 95% CIs using a random-effects model. Results: Seventy-three studies were included in our qualitative synthesis, and fifty-eight were selected for our meta-analysis. A pooled sensitivity of 0.92 (95% CI 0.90–0.94) and a pooled specificity of 0.94 (95% CI 0.92–0.95) were achieved. The pooled PPV was 0.84 (95% CI 0.78–0.89) and the pooled NPV was 0.97 (95% CI 0.96–0.98). A bivariate model showed a pooled AUC of 0.96 (95% CI 0.95–0.97). Conclusions: This meta-analysis demonstrates that DL performs well in detecting ICH on NCCT, highlighting promising potential for AI tools in various practice settings. More prospective studies are needed to confirm the clinical benefit of implementing DL-based tools, to reveal their limitations for automated ICH detection, and to assess their impact on clinical workflow and patient outcomes. Full article
(This article belongs to the Special Issue Neurocritical Care: Clinical Advances and Practice Updates)
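The pooling step described in the Methods — combining per-study sensitivities into one estimate with a 95% CI under a random-effects model — can be sketched as follows. This is an illustrative DerSimonian–Laird implementation on the logit scale; the paper's actual software and settings are not specified here, and the function name and the 0.5 continuity correction are assumptions:

```python
from math import exp, log, sqrt

def logit(p):
    return log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + exp(-x))

def dl_pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooled proportion on the logit scale.

    events[i] successes out of totals[i] (e.g., true positives out of
    positives, for sensitivity); returns (pooled, ci_lo, ci_hi) at 95%.
    """
    ys, vs = [], []
    for e, n in zip(events, totals):
        e_adj, n_adj = e + 0.5, n + 1.0      # continuity correction for zero cells
        ys.append(logit(e_adj / n_adj))
        vs.append(1 / e_adj + 1 / (n_adj - e_adj))  # variance of logit(p)
    w = [1 / v for v in vs]                   # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    k = len(ys)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_re = [1 / (v + tau2) for v in vs]       # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = sqrt(1 / sum(w_re))
    return inv_logit(mu), inv_logit(mu - 1.96 * se), inv_logit(mu + 1.96 * se)
```

Bivariate models such as the one yielding the pooled AUC additionally model the correlation between sensitivity and specificity, which this univariate sketch ignores.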