Search Results (804)

Search Parameters:
Keywords = workflow automation

22 pages, 5333 KiB  
Review
A Review of Standardization in Mississippi’s Multidecadal Inland Fisheries Monitoring Program
by Caleb A. Aldridge and Michael E. Colvin
Fishes 2025, 10(5), 235; https://doi.org/10.3390/fishes10050235 - 18 May 2025
Abstract
Standardizing data collection, management, and analysis processes can improve the reliability and efficiency of fisheries monitoring programs, yet few studies have examined the operationalization of these tasks within agency settings. We reviewed the Mississippi Department of Wildlife, Fisheries, and Parks, Fisheries Bureau’s inland recreational fisheries monitoring program—a 30+-year effort to standardize field protocols, data handling procedures, and automated analyses through a custom-built computer application, the Fisheries Resources Analysis System (FRAS). Drawing on quantitative summaries of sampling trends and qualitative interviews with fisheries managers, we identified key benefits, challenges, and opportunities associated with the Bureau’s standardization efforts. Standardized procedures improved sampling consistency, data reliability, and operational efficiency, enabling the long-term tracking of fish population and angler metrics across more than 270 managed waterbodies. However, challenges related to analytical transparency and spatiotemporal comparisons persist. Simulations indicated that under current conditions, 5.8, 22.9, and 37.1 years would be required to sample (boat electrofishing) 50%, 75%, and 95% of the Bureau’s waterbodies at least once, respectively; these figures should translate to other agencies, assuming similar resource availability per waterbody. The monitoring program has reduced manual processing effort and enhanced staff capacity for waterbody-specific management, yet several opportunities remain to improve efficiency and utility. These include expanding FRAS functionalities for trend visualization, integrating mobile field data entry to reduce transcription errors, linking monitoring results with management objectives, and enhancing automated report generation for management support. Strengthening these elements could not only streamline workflows but better position agencies to apply standardized data in adaptive management embedded into the monitoring program. Full article
(This article belongs to the Section Fishery Economics, Policy, and Management)
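To make the coverage arithmetic concrete, the sketch below reruns a simplified version of the sampling simulation: waterbodies are drawn with replacement each year, and the loop counts years until a target fraction has been electrofished at least once. The waterbody count comes from the abstract; the annual visit budget and the uniform allocation are illustrative assumptions, so this toy model will not reproduce the published 5.8-, 22.9-, and 37.1-year figures exactly.

```python
import random

def years_to_coverage(n=270, visits_per_year=30, target=0.50, trials=500):
    """Monte Carlo sketch: years until `target` fraction of `n` waterbodies has
    been sampled at least once, with `visits_per_year` boat-electrofishing trips
    allocated at random with replacement (popular waters get revisited).
    Parameter values are illustrative, not the Bureau's actual effort."""
    years_needed = []
    for _ in range(trials):
        visited, years = set(), 0
        while len(visited) < target * n:
            visited.update(random.choices(range(n), k=visits_per_year))
            years += 1
        years_needed.append(years)
    return sum(years_needed) / len(years_needed)

for t in (0.50, 0.75, 0.95):
    print(f"{t:.0%} coverage: ~{years_to_coverage(target=t):.1f} years")
```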

20 pages, 6167 KiB  
Article
DyEHS: An Integrated Dynamo–EPANET–Harmony Search Framework for the Optimal Design of Water Distribution Networks
by Francesco De Paola, Giuseppe Speranza, Giuseppe Ascione and Nunzio Marrone
Buildings 2025, 15(10), 1694; https://doi.org/10.3390/buildings15101694 - 17 May 2025
Abstract
The integration of Building Information Modeling (BIM) with intelligent optimization techniques can significantly enhance the design efficiency of water distribution networks (WDNs). Despite this, the dynamic interoperability between BIM platforms and hydraulic simulation tools remains limited. This study introduces DyEHS (Dynamo–EPANET–Harmony Search), a novel workflow integrating Autodesk Civil 3D, EPANET, and Harmony Search via Dynamo, to address this gap. DyEHS enables the automated optimization of pipe diameters and network layouts, aiming to minimize capital costs while satisfying hydraulic constraints. In a real-world case study, DyEHS achieved a 15% reduction in the total pipe network costs compared to traditional uniform-diameter designs, while ensuring that all nodes maintained a minimum pressure of 25 m. This quantifiable improvement highlights the tool’s potential for practical engineering applications, offering a robust, adaptable, and fully integrated BIM-based solution for WDN design. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
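The Harmony Search step can be sketched independently of the BIM toolchain. In the fragment below, the candidate diameters, network size, cost function, and feasibility check are all stand-ins (DyEHS prices real pipes and runs EPANET through Dynamo to enforce the 25 m pressure constraint); only the memory-consideration and pitch-adjustment loop reflects the algorithm itself.

```python
import random

DIAMETERS = [100, 150, 200, 250, 300, 400, 500]   # candidate pipe diameters, mm (illustrative)
N_PIPES = 12                                       # illustrative network size

def cost(design):
    """Stub cost, proportional to diameter; the real workflow prices pipes per metre."""
    return sum(design)

def hydraulically_feasible(design):
    """Placeholder for the EPANET pressure check (>= 25 m at every node in the
    paper); here a minimum total capacity keeps the sketch self-contained."""
    return sum(design) >= 2500

def harmony_search(hms=10, hmcr=0.9, par=0.3, iters=2000):
    memory = [[random.choice(DIAMETERS) for _ in range(N_PIPES)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for p in range(N_PIPES):
            if random.random() < hmcr:                 # draw pipe p from harmony memory
                d = random.choice(memory)[p]
                if random.random() < par:              # pitch adjustment: step one size up/down
                    i = DIAMETERS.index(d) + random.choice((-1, 1))
                    d = DIAMETERS[max(0, min(len(DIAMETERS) - 1, i))]
            else:                                      # random consideration
                d = random.choice(DIAMETERS)
            new.append(d)
        worst = max(memory, key=cost)
        if hydraulically_feasible(new) and cost(new) < cost(worst):
            memory[memory.index(worst)] = new          # replace the worst harmony
    feasible = [m for m in memory if hydraulically_feasible(m)]
    return min(feasible or memory, key=cost)

print(harmony_search())
```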

20 pages, 2736 KiB  
Article
Clinical Validation and Post-Implementation Performance Monitoring of a Neural Network-Assisted Approach for Detecting Chronic Lymphocytic Leukemia Minimal Residual Disease by Flow Cytometry
by Jansen N. Seheult, Gregory E. Otteson, Matthew J. Weybright, Michael M. Timm, Wenchao Han, Dragan Jevremovic, Pedro Horna, Horatiu Olteanu and Min Shi
Cancers 2025, 17(10), 1688; https://doi.org/10.3390/cancers17101688 - 17 May 2025
Abstract
Background: Flow cytometric detection of minimal residual disease (MRD) in chronic lymphocytic leukemia (CLL) is complex, time-consuming, and subject to inter-operator variability. Deep neural networks (DNNs) offer potential for standardization and efficiency improvement, but require rigorous validation and monitoring for safe clinical implementation. Methods: We evaluated a DNN-assisted human-in-the-loop approach for CLL MRD detection. Initial validation included method comparison against manual analysis (n = 240), precision studies, and analytical sensitivity verification. Post-implementation monitoring comprised four components: daily electronic quality control, input data drift detection, error analysis, and attribute acceptance sampling. Laboratory efficiency was assessed through a timing study of 161 cases analyzed by five technologists. Results: Method comparison demonstrated 97.5% concordance with manual analysis for qualitative classification (sensitivity 100%, specificity 95%) and excellent correlation for quantitative assessment (r = 0.99, Deming slope = 0.99). Precision studies confirmed high repeatability and within-laboratory precision across multiple operators. Analytical sensitivity was verified at 0.002% MRD. Post-implementation monitoring identified 2.97% of cases (26/874) with input data drift, primarily high-burden CLL and non-CLL neoplasms. Error analysis showed the DNN alone achieved 97% sensitivity compared to human-in-the-loop-reviewed results, with 13 missed cases (1.5%) showing atypical immunophenotypes. Attribute acceptance sampling confirmed 98.8% of reported negative cases were true negatives. The DNN-assisted workflow reduced average analysis time by 60.3% compared to manual analysis (4.2 ± 2.3 vs. 10.5 ± 5.8 min). Conclusions: The implementation of a DNN-assisted approach for CLL MRD detection in a clinical laboratory provides diagnostic performance equivalent to expert manual analysis while substantially reducing analysis time. Comprehensive performance monitoring ensures ongoing safety and effectiveness in routine clinical practice. This approach provides a model for responsible AI integration in clinical laboratories, balancing automation benefits with expert oversight. Full article
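The qualitative half of such a method comparison reduces to a paired confusion matrix. A minimal sketch, assuming boolean MRD calls per case (the study's 240-case comparison also included quantitative Deming regression and precision studies):

```python
def qualitative_concordance(dnn_calls, manual_calls):
    """Concordance, sensitivity, and specificity of DNN-assisted MRD calls
    against manual analysis (positive = MRD detected). Illustrative only."""
    pairs = list(zip(dnn_calls, manual_calls))
    tp = sum(1 for d, m in pairs if d and m)
    tn = sum(1 for d, m in pairs if not d and not m)
    fp = sum(1 for d, m in pairs if d and not m)
    fn = sum(1 for d, m in pairs if not d and m)
    return {
        "concordance": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

# e.g. a method comparison: one boolean per case from each analysis arm
print(qualitative_concordance([True, False, True, False], [True, False, False, False]))
```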

23 pages, 2423 KiB  
Article
Comparative Study of Cell Nuclei Segmentation Based on Computational and Handcrafted Features Using Machine Learning Algorithms
by Rashadul Islam Sumon, Md Ariful Islam Mozumdar, Salma Akter, Shah Muhammad Imtiyaj Uddin, Mohammad Hassan Ali Al-Onaizan, Reem Ibrahim Alkanhel and Mohammed Saleh Ali Muthanna
Diagnostics 2025, 15(10), 1271; https://doi.org/10.3390/diagnostics15101271 - 16 May 2025
Abstract
Background: Nuclei segmentation is the first stage of automated microscopic image analysis. Segmenting the cell nucleus is crucial for gaining insight into cell characteristics and functions, enabling computer-aided pathology for early detection of diseases such as prostate cancer, breast cancer, and brain tumors. Nucleus segmentation remains a challenging task despite significant advancements in automated methods. Traditional techniques, such as Otsu thresholding and watershed approaches, are ineffective in challenging scenarios, whereas deep learning-based methods exhibit remarkable results across various biological imaging modalities, including computational pathology. Methods: This work explores machine learning approaches for nuclei segmentation by evaluating the quality of the resulting segmentations. We employed several methods: K-means clustering, Random Forest (RF), and Support Vector Machine (SVM) with handcrafted features, and Logistic Regression (LR) using features derived from Convolutional Neural Networks (CNNs). Handcrafted features capture attributes such as the shape, texture, and intensity of nuclei and are meticulously developed based on specialized knowledge. Conversely, CNN-based features are automatically learned representations that identify complex patterns in nuclei images. Results: Experimental results show that Logistic Regression based on CNN-derived features outperforms the other techniques, achieving an accuracy of 96.90%, a Dice coefficient of 74.24, and a Jaccard coefficient of 55.61. In contrast, the Random Forest, Support Vector Machine, and K-means algorithms yielded lower segmentation performance metrics. Conclusions: These results suggest that leveraging CNN-based features in conjunction with Logistic Regression significantly enhances the accuracy of cell nuclei segmentation in pathological images. This approach holds promise for refining computer-aided pathology workflows, potentially leading to more reliable and earlier disease diagnoses. Full article
(This article belongs to the Special Issue Diagnostic Imaging of Prostate Cancer)
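A minimal sketch of the winning pipeline, with random stand-ins for the CNN-derived per-pixel features (in the study these come from a trained CNN encoder) and the Dice coefficient computed directly from its definition:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def dice(pred, truth):
    """Dice coefficient between two binary masks (flattened)."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum())

# X: per-pixel feature vectors (n_pixels, n_features); y: nucleus vs. background.
# Random stand-ins here; in the study the features come from a CNN encoder.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic label rule

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
pred = clf.predict(X[4000:])
print("accuracy:", (pred == y[4000:]).mean())
print("dice:", dice(pred.astype(bool), y[4000:].astype(bool)))
```
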
45 pages, 14000 KiB  
Article
Automated Eye Disease Diagnosis Using a 2D CNN with Grad-CAM: High-Accuracy Detection of Retinal Asymmetries for Multiclass Classification
by Sameh Abd El-Ghany, Mahmood A. Mahmood and A. A. Abd El-Aziz
Symmetry 2025, 17(5), 768; https://doi.org/10.3390/sym17050768 - 15 May 2025
Abstract
Eye diseases (EDs), including glaucoma, diabetic retinopathy, and cataracts, are major contributors to vision loss and reduced quality of life worldwide. These conditions not only affect millions of individuals but also impose a significant burden on global healthcare systems. As the population ages and lifestyle changes increase the prevalence of conditions like diabetes, the incidence of EDs is expected to rise, further straining diagnostic and treatment resources. Timely and accurate diagnosis is critical for effective management and prevention of vision loss, as early intervention can significantly slow disease progression and improve patient outcomes. However, traditional diagnostic methods rely heavily on manual analysis of fundus imaging, which is labor-intensive, time-consuming, and subject to human error. This underscores the urgent need for automated, efficient, and accurate diagnostic systems that can handle the growing demand while maintaining high diagnostic standards. Current approaches, while advancing, still face challenges such as inefficiency, susceptibility to errors, and limited ability to detect subtle retinal asymmetries, which are critical early indicators of disease. Effective solutions must address these issues while ensuring high accuracy, interpretability, and scalability. This research introduces a 2D single-channel convolutional neural network (CNN) based on ResNet101-V2 architecture. The model integrates gradient-weighted class activation mapping (Grad-CAM) to highlight retinal asymmetries linked to EDs, thereby enhancing interpretability and detection precision. Evaluated on retinal Optical Coherence Tomography (OCT) datasets for multiclass classification tasks, the model demonstrated exceptional performance, achieving accuracy rates of 99.90% for four-class tasks and 99.27% for eight-class tasks. By leveraging patterns of retinal symmetry and asymmetry, the proposed model improves early detection and simplifies the diagnostic workflow, offering a promising advancement in the field of automated eye disease diagnosis. Full article
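Grad-CAM itself is a short computation: pool the gradient of the class score over the last convolutional feature maps, use the pooled values as channel weights, and apply a ReLU. The sketch below uses an untrained Keras ResNet101V2 with a placeholder 4-class head, not the authors' trained model; the layer name post_relu is an assumption about Keras' ResNetV2 naming to verify against your build.

```python
import numpy as np
import tensorflow as tf

# Backbone similar in spirit to the paper's ResNet101-V2 classifier; untrained placeholder.
base = tf.keras.applications.ResNet101V2(weights=None, input_shape=(224, 224, 3), classes=4)

def grad_cam(model, image, class_idx, conv_layer="post_relu"):
    """Gradient-weighted class activation map: weight the last conv feature maps
    by the pooled gradient of the class score, then ReLU. `post_relu` is assumed
    to be the final conv activation in Keras' ResNetV2 builds (verify)."""
    grad_model = tf.keras.Model(model.input, [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool the gradients
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()       # normalise to [0, 1]

heatmap = grad_cam(base, np.zeros((224, 224, 3), np.float32), class_idx=0)
print(heatmap.shape)  # (7, 7): upsample onto the OCT scan for display
```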

35 pages, 10924 KiB  
Article
Winding Fault Detection in Power Transformers Based on Support Vector Machine and Discrete Wavelet Transform Approach
by Bonginkosi A. Thango
Technologies 2025, 13(5), 200; https://doi.org/10.3390/technologies13050200 - 14 May 2025
Abstract
Transformer winding faults (TWFs) can lead to insulation breakdown, internal short circuits, and catastrophic transformer failure. Due to their low current magnitude—particularly at early stages such as inter-turn short circuits, axial or radial displacement, or winding looseness—TWFs often induce minimal impedance changes and generate fault currents that remain within normal operating thresholds. As a result, conventional protection schemes like overcurrent relays, which are tuned for high-magnitude faults, fail to detect such internal anomalies. Moreover, frequency response deviations caused by TWFs often resemble those introduced by routine phenomena such as tap changer operations, load variation, or core saturation, making accurate diagnosis difficult using traditional FRA interpretation techniques. This paper presents a novel diagnostic framework combining Discrete Wavelet Transform (DWT) and Support Vector Machine (SVM) classification to improve the detection of TWFs. The proposed system employs region-based statistical deviation labeling to enhance interpretability across five well-defined frequency bands. It is validated on five real FRA datasets obtained from operating transformers in Gauteng Province, South Africa, covering a range of MVA ratings and configurations, thereby confirming model transferability. The system supports post-processing but is lightweight enough for near real-time diagnostic use, with average execution time under 12 s per case on standard hardware. A custom graphical user interface (GUI), developed in MATLAB R2022a, automates the diagnostic workflow—including region identification, wavelet-based decomposition visualization, and PDF report generation. The complete framework is released as an open-access toolbox for transformer condition monitoring and predictive maintenance. Full article
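The DWT-plus-SVM pattern can be sketched in a few lines: decompose each FRA trace into wavelet bands, summarize each band statistically, and train a classifier. The wavelet choice, decomposition level, feature set, and synthetic traces below are illustrative, not the paper's tuned configuration or its field data.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def dwt_features(fra_trace, wavelet="db4", level=5):
    """Statistical features per DWT band of an FRA magnitude trace; the paper
    labels statistical deviations across five defined frequency regions."""
    feats = []
    for band in pywt.wavedec(fra_trace, wavelet, level=level):
        feats += [band.mean(), band.std(), np.abs(band).max(), (band ** 2).sum()]
    return np.array(feats)

# Synthetic stand-ins for healthy vs. faulted FRA sweeps (1000 points each).
rng = np.random.default_rng(1)
healthy = [np.sin(np.linspace(0, 40, 1000)) + 0.05 * rng.normal(size=1000) for _ in range(20)]
faulted = [np.sin(np.linspace(0, 40, 1000) * 1.1) + 0.05 * rng.normal(size=1000) for _ in range(20)]

X = np.array([dwt_features(t) for t in healthy + faulted])
y = np.array([0] * 20 + [1] * 20)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```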

12 pages, 905 KiB  
Article
Radiological Reporting of Brain Atrophy in MRI: Real-Life Comparison Between Narrative Reports, Semiquantitative Scales and Automated Software-Based Volumetry
by Federico Bruno, Cristina Fagotti, Gaspare Saltarelli, Giovanni Di Cerbo, Alessandra Sabatelli, Claudia De Felici, Antonio Innocenzi, Ernesto Di Cesare and Alessandra Splendiani
Diagnostics 2025, 15(10), 1246; https://doi.org/10.3390/diagnostics15101246 - 14 May 2025
Abstract
Background: Accurate assessment of brain atrophy is essential in the diagnosis and monitoring of brain aging and neurodegenerative disorders. Radiological methods range from narrative reporting to semi-quantitative visual rating scales (VRSs) and fully automated volumetric software. However, their integration and consistency in clinical practice remain limited. Methods: In this retrospective study, brain MRI images of 43 patients were evaluated. Brain atrophy was assessed by extrapolating findings from narrative radiology reports, three validated VRSs (MTA, Koedam, Pasquier), and Pixyl.Neuro.BV, a commercially available volumetric software platform. Agreement between methods was assessed using intraclass correlation coefficients (ICCs), Cohen’s kappa, Spearman’s correlation, and McNemar tests. Results: Moderate correlation was found between narrative reports and VRSs (ρ = 0.55–0.69), but categorical agreement was limited (kappa = 0.21–0.30). Visual scales underestimated atrophy relative to software (mean scores: VRSs = 0.196; software = 0.279), while reports tended to overestimate. Agreement between VRSs and software was poor (kappa = 0.14–0.33), though MTA showed a significant correlation with hippocampal volume. Agreement between reports and software was lowest for global atrophy. Conclusions: Narrative reports, while common in practice, show low consistency with structured scales and quantitative software, especially in subtle cases. VRSs improve standardization but remain subjective and less sensitive. Integrating structured scales and volumetric tools into clinical workflows may enhance diagnostic accuracy and consistency in dementia imaging. Full article
(This article belongs to the Special Issue An Update on Radiological Diagnosis in 2024)
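The agreement statistics used here are standard library calls; the toy grades below are invented, and illustrate why a moderate Spearman correlation can coexist with low categorical agreement (ranks align while exact categories differ):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Ordinal atrophy grades (e.g. MTA 0-4) from two methods for the same patients;
# values are illustrative, not the study's data.
narrative = np.array([0, 1, 1, 2, 0, 3, 1, 2, 2, 0])
visual_scale = np.array([0, 1, 2, 2, 1, 3, 1, 1, 2, 0])

rho, _ = spearmanr(narrative, visual_scale)
kappa = cohen_kappa_score(narrative, visual_scale)                     # exact-category agreement
wkappa = cohen_kappa_score(narrative, visual_scale, weights="quadratic")  # credits near-misses
print(f"Spearman rho={rho:.2f}, kappa={kappa:.2f}, weighted kappa={wkappa:.2f}")
```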

19 pages, 3724 KiB  
Article
SYNCode: Synergistic Human–LLM Collaboration for Enhanced Data Annotation in Stack Overflow
by Meng Xia, Shradha Maharjan, Tammy Le, Will Taylor and Myoungkyu Song
Information 2025, 16(5), 392; https://doi.org/10.3390/info16050392 - 9 May 2025
Abstract
Large language models (LLMs) have rapidly advanced natural language processing, showcasing remarkable effectiveness as automated annotators across various applications. Despite their potential to significantly reduce annotation costs and expedite workflows, annotations produced solely by LLMs can suffer from inaccuracies and inherent biases, highlighting the necessity of maintaining human oversight. In this article, we present a synergistic human–LLM collaboration approach for data annotation enhancement (SYNCode). This framework is designed explicitly to facilitate collaboration between humans and LLMs for annotating complex, code-centric datasets such as Stack Overflow. The proposed approach involves an integrated pipeline that initially employs TF-IDF analysis for quick identification of relevant textual elements. Subsequently, we leverage advanced transformer-based models, specifically NLP Transformer and UniXcoder, to capture nuanced semantic contexts and code structures, generating more accurate preliminary annotations. Human annotators then engage in iterative refinement, validating and adjusting annotations to enhance accuracy and mitigate biases introduced during automated labeling. To operationalize this synergistic workflow, we developed the SYNCode prototype, featuring an interactive graphical interface that supports real-time collaborative annotation between humans and LLMs. This enables annotators to iteratively refine and validate automated suggestions effectively. Our integrated human–LLM collaborative methodology demonstrates considerable promise in achieving high-quality, reliable annotations, particularly for domain-specific and technically demanding datasets, thereby enhancing downstream tasks in software engineering and natural language processing. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
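The first SYNCode stage, TF-IDF screening for relevant textual elements, takes only a few lines with scikit-learn. The posts and query below are invented; in the described pipeline, the top hits then proceed to UniXcoder-based semantic analysis and human review.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "How do I reverse a list in Python without copying it?",
    "NullPointerException when autowiring a Spring bean",
    "Fastest way to reverse a linked list in place",
]
query = "reverse list in place"

vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(posts)                     # TF-IDF index of the corpus
scores = cosine_similarity(vec.transform([query]), doc_matrix)[0]

for score, post in sorted(zip(scores, posts), reverse=True):
    print(f"{score:.2f}  {post}")   # top hits go on to transformer models + human review
```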

26 pages, 17330 KiB  
Article
Research on Automated On-Site Construction of Timber Structures: Mobile Construction Platform Guided by Real-Time Visual Positioning System
by Kang Bi, Xinyu Shi, Da Wan, Haining Zhou, Wenxuan Zhao, Chengpeng Sun, Peng Du and Hiroatsu Fukuda
Buildings 2025, 15(10), 1594; https://doi.org/10.3390/buildings15101594 - 8 May 2025
Abstract
In recent years, the AEC industry has increasingly sought sustainable solutions to enhance productivity and reduce environmental pollution, with wood emerging as a key renewable material due to its excellent carbon sequestration capability and low ecological footprint. Despite significant advances in digital fabrication technologies for timber construction, on-site assembly still predominantly relies on manual operations, thereby limiting efficiency and precision. To address this challenge, this study proposes an automated on-site timber construction process that integrates a mobile construction platform (MCP), a fiducial marker system (FMS) and a UWB/IMU integrated navigation system. By deconstructing traditional modular stacking methods and iteratively developing the process in a controlled laboratory environment, the authors formalize raw construction experience into an effective workflow, supplemented by a self-feedback error correction system to achieve precise, real-time end-effector positioning. Extensive experimental results demonstrate that the system consistently achieves millimeter-level positioning accuracy across all test scenarios, with translational errors of approximately 1 mm and an average repeat positioning precision of up to 0.08 mm, thereby aligning with on-site timber construction requirements. These findings validate the method’s technical reliability, robustness and practical applicability, laying a solid foundation for a smooth transition from laboratory trials to large-scale on-site timber construction. Full article
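The self-feedback correction idea can be sketched as a closed loop: measure the end-effector pose from the fiducial markers, command a fraction of the remaining error, and re-measure until within tolerance. The proportional scheme, gain, and 1 mm tolerance below are illustrative assumptions, not the authors' controller.

```python
import numpy as np

def correct_pose(target_xy, measured_xy, gain=0.8, tol=0.001):
    """One step of a self-feedback loop: the visual positioning system reports
    the end-effector pose from fiducial markers, and the platform commands a
    fraction (`gain`) of the remaining error. Illustrative values only."""
    error = np.asarray(target_xy) - np.asarray(measured_xy)
    if np.linalg.norm(error) < tol:        # within ~1 mm: place the element
        return None
    return gain * error                    # commanded translation, metres

pose = np.array([0.10, -0.05])             # pose as measured by the fiducial marker system
target = np.array([0.0, 0.0])
while (step := correct_pose(target, pose)) is not None:
    pose = pose + step                     # platform moves; FMS re-measures
print("converged at", pose)
```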

16 pages, 3560 KiB  
Article
Year-Round Acoustic Presence of Beaked Whales (Ziphiidae) Far Offshore off Australia’s Northwest Shelf
by Evgenii Sidenko, Iain Parnum, Alexander Gavrilov, Robert McCauley and Christine Erbe
J. Mar. Sci. Eng. 2025, 13(5), 927; https://doi.org/10.3390/jmse13050927 - 8 May 2025
Abstract
Beaked whales are cryptic pelagic species, rarely sighted at sea. In a ~2.5-year passive acoustic monitoring program on Australia’s Northwest Shelf, a variety of marine mammal sounds were detected, including beaked whale (Ziphiidae) clicks. An automatic detection routine for beaked whale clicks was developed, tested, and run on these recordings. The detection workflow included: (1) the extraction of impulsive signals from passive acoustic recordings based on an auto-regression model, (2) the calculation of a set of features of the extracted signals, and (3) binary signal classification based on these features. Detector performance (Precision, Recall, and F1-score) was assessed using a manually annotated dataset of extracted clicks. This automated routine allows the acoustic presence and the spatial and temporal distribution of animals to be analyzed quickly. In our study, beaked whales were present all year round at six deep-water (>1000 m) sites, but no clicks were detected at the shallow-water (~70 m) site. No seasonal or diurnal patterns of beaked whale clicks were identified. Full article
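Step (1) of the workflow, auto-regression-based impulse extraction, amounts to flagging samples that a linear predictor cannot explain. A self-contained sketch on synthetic data follows; the AR order, threshold, and the shortcut of calling every strong residual a click are illustrative, whereas the paper classifies extracted signals on computed features.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def ar_residual(x, order=8):
    """Fit a linear (auto-regressive) predictor to the recording and return the
    prediction error; impulsive clicks predict poorly, so their residual is large."""
    lags = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(lags, x[order:], rcond=None)
    return x[order:] - lags @ coef

rng = np.random.default_rng(2)
x = rng.normal(scale=0.1, size=5000)        # synthetic ambient noise
click_pos = [1000, 2500, 4000]
for p in click_pos:
    x[p:p + 5] += 3.0                        # synthetic impulsive clicks

order = 8
resid = ar_residual(x, order)
detections = np.abs(resid) > 6 * resid.std()  # step 1: extract impulsive signals

truth = np.zeros_like(detections, dtype=bool)
for p in click_pos:
    truth[p - order:p - order + 5] = True     # residual index is offset by `order`
p_, r_, f_, _ = precision_recall_fscore_support(truth, detections, average="binary")
print(f"precision={p_:.2f} recall={r_:.2f} F1={f_:.2f}")
```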

16 pages, 2816 KiB  
Review
Artificial General Intelligence (AGI) Applications and Prospect in Oil and Gas Reservoir Development
by Jiulong Wang, Xiaotian Luo, Xuhui Zhang and Shuyi Du
Processes 2025, 13(5), 1413; https://doi.org/10.3390/pr13051413 - 6 May 2025
Abstract
The cornerstone of the global economy, oil and gas reservoir development, faces numerous challenges such as resource depletion, operational inefficiencies, safety concerns, and environmental impacts. In recent years, the integration of artificial intelligence (AI), particularly artificial general intelligence (AGI), has gained significant attention for its potential to address these challenges. This review explores the current state of AGI applications in the oil and gas sector, focusing on key areas such as data analysis, optimized decision and knowledge management, etc. AGIs, leveraging vast datasets and advanced retrieval-augmented generation (RAG) capabilities, have demonstrated remarkable success in automating data-driven decision-making processes, enhancing predictive analytics, and optimizing operational workflows. In exploration, AGIs assist in interpreting seismic data and geophysical surveys, providing insights into subsurface reservoirs with higher accuracy. During production, AGIs enable real-time analysis of operational data, predicting equipment failures, optimizing drilling parameters, and increasing production efficiency. Despite the promising applications, several challenges remain, including data quality, model interpretability, and the need for high-performance computing resources. This paper also discusses the future prospects of AGI in oil and gas reservoir development, highlighting the potential for multi-modal AI systems, which combine textual, numerical, and visual data to further enhance decision-making processes. In conclusion, AGIs have the potential to revolutionize oil and gas reservoir development by driving automation, enhancing operational efficiency, and improving safety. However, overcoming existing technical and organizational challenges will be essential for realizing the full potential of AI in this sector. Full article

21 pages, 2806 KiB  
Article
A Computer-Aided Approach to Canine Hip Dysplasia Assessment: Measuring Femoral Head–Acetabulum Distance with Deep Learning
by Pedro Franco-Gonçalo, Pedro Leite, Sofia Alves-Pimenta, Bruno Colaço, Lio Gonçalves, Vítor Filipe, Fintan McEvoy, Manuel Ferreira and Mário Ginja
Appl. Sci. 2025, 15(9), 5087; https://doi.org/10.3390/app15095087 - 3 May 2025
Abstract
Canine hip dysplasia (CHD) screening relies on radiographic assessment, but traditional scoring methods often lack consistency due to inter-rater variability. This study presents an AI-driven system for automated measurement of the femoral head center to dorsal acetabular edge (FHC/DAE) distance, a key metric in CHD evaluation. Unlike most AI models that directly classify CHD severity using convolutional neural networks, this system provides an interpretable, measurement-based output to support a more transparent evaluation. The system combines a keypoint regression model for femoral head center localization with a U-Net-based segmentation model for acetabular edge delineation. It was trained on 7967 images for hip joint detection, 571 for keypoints, and 624 for acetabulum segmentation, all from ventrodorsal hip-extended radiographs. On a test set of 70 images, the keypoint model achieved high precision (Euclidean Distance = 0.055 mm; Mean Absolute Error = 0.0034 mm; Mean Squared Error = 2.52 × 10−5 mm2), while the segmentation model showed strong performance (Dice Score = 0.96; Intersection over Union = 0.92). Comparison with expert annotations demonstrated strong agreement (Intraclass Correlation Coefficients = 0.97 and 0.93; Weighted Kappa = 0.86 and 0.79; Standard Error of Measurement = 0.92 to 1.34 mm). By automating anatomical landmark detection, the system enhances standardization, reproducibility, and interpretability in CHD radiographic assessment. Its strong alignment with expert evaluations supports its integration into CHD screening workflows for more objective and efficient diagnosis and CHD scoring. Full article
(This article belongs to the Special Issue Research on Machine Learning in Computer Vision)
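Once the keypoint and the segmentation mask exist, the FHC/DAE measurement itself reduces to a nearest-edge distance. A minimal sketch, assuming a pixel mask of the dorsal acetabular edge and a known mm-per-pixel calibration (the signed convention and any edge-thinning step used in practice are omitted):

```python
import numpy as np

def fhc_dae_distance(fhc_xy, edge_mask, mm_per_px):
    """Distance from the femoral head centre (keypoint model output) to the
    nearest pixel of the dorsal acetabular edge (segmentation model output),
    converted to millimetres. Illustrative simplification of the measurement."""
    edge_rc = np.argwhere(edge_mask)                     # (row, col) edge pixels
    if edge_rc.size == 0:
        raise ValueError("empty acetabular edge mask")
    d_px = np.min(np.linalg.norm(edge_rc[:, ::-1] - np.asarray(fhc_xy), axis=1))
    return d_px * mm_per_px

mask = np.zeros((512, 512), bool)
mask[200, 240:300] = True                                # toy horizontal edge segment
print(fhc_dae_distance((260, 230), mask, mm_per_px=0.1), "mm")  # 30 px -> 3.0 mm
```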

28 pages, 7155 KiB  
Review
Accelerating Biologics PBPK Modelling with Automated Model Building: A Tutorial
by Abdallah Derbalah, Tariq Abdulla, Mailys De Sousa Mendes, Qier Wu, Felix Stader, Masoud Jamei, Iain Gardner and Armin Sepp
Pharmaceutics 2025, 17(5), 604; https://doi.org/10.3390/pharmaceutics17050604 - 2 May 2025
Abstract
Physiologically based pharmacokinetic (PBPK) modelling for biologics, such as monoclonal antibodies and therapeutic proteins, involves capturing complex processes, including target-mediated drug disposition (TMDD), FcRn-mediated recycling, and tissue-specific distribution. The Simcyp Designer Biologics PBPK Platform Model offers an intuitive and efficient platform for constructing mechanistic PBPK models with pre-defined templates and automated model assembly, reducing manual input and improving reproducibility. This tutorial provides a step-by-step guide to using the platform, highlighting features such as cross-species scaling, population variability simulations, and flexibility for model customization. Practical case studies demonstrate the platform’s capability to streamline workflows, enabling rapid, mechanistic model development to address key questions in biologics drug development. By automating critical processes, this tool enhances decision-making in translational research, optimizing the modelling and simulation of large molecules across discovery and clinical stages. Full article

18 pages, 1221 KiB  
Technical Note
swmm_api: A Python Package for Automation, Customization, and Visualization in SWMM-Based Urban Drainage Modeling
by Markus Pichler
Water 2025, 17(9), 1373; https://doi.org/10.3390/w17091373 - 1 May 2025
Abstract
The Python package swmm_api addresses a critical gap in urban drainage modeling by providing a flexible, script-based tool for managing SWMM models. Recognizing the limitations of existing solutions, this study developed a Python-based approach that seamlessly integrates SWMM model creation, editing, analysis, and visualization within Python’s extensive ecosystem. The package offers intuitive, dictionary-like interactions with model components, enabling manipulation of input files and extraction of results as structured data. It supports advanced GIS integration, sensitivity analysis, calibration, and uncertainty estimation through libraries like GeoPandas, SALib, and SPOTPY. Results demonstrate significant efficiency improvements in repetitive tasks, including batch simulations, sensitivity analyses, and automated GIS data processing, exemplified by practical applications such as model updates for municipal sewer systems. The package significantly enhances reproducibility and facilitates transparent sharing of scientific workflows. Overall, swmm_api provides researchers and practitioners with a robust, adaptable solution for streamlined urban drainage modeling. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
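A minimal sketch of the dictionary-like, script-based workflow the abstract describes. The call names below (read_inp_file, inp.JUNCTIONS, write_file, read_out_file, to_frame) are taken from the swmm_api documentation but should be verified against the installed version.

```python
from swmm_api import read_inp_file, read_out_file  # verify names against your swmm_api version

# Read an existing SWMM input file into a dict-like model object.
inp = read_inp_file("model.inp")

# Dictionary-like access to a section: raise every junction invert by 5 cm.
# Batch edits like this are where the scripted workflow beats the GUI.
for name, junction in inp.JUNCTIONS.items():
    junction.elevation += 0.05   # attribute name assumed from the docs

inp.write_file("model_updated.inp")        # write the edited model back out

# After running SWMM on the updated file, pull results as a pandas DataFrame
# for plotting, sensitivity analysis, or calibration.
out = read_out_file("model_updated.out")
df = out.to_frame()
print(df.head())
```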

38 pages, 1484 KiB  
Review
Enhancing Radiologist Productivity with Artificial Intelligence in Magnetic Resonance Imaging (MRI): A Narrative Review
by Arun Nair, Wilson Ong, Aric Lee, Naomi Wenxin Leow, Andrew Makmur, Yong Han Ting, You Jun Lee, Shao Jin Ong, Jonathan Jiong Hao Tan, Naresh Kumar and James Thomas Patrick Decourcy Hallinan
Diagnostics 2025, 15(9), 1146; https://doi.org/10.3390/diagnostics15091146 - 30 Apr 2025
Abstract
Artificial intelligence (AI) shows promise in streamlining MRI workflows by reducing radiologists’ workload and improving diagnostic accuracy. Despite MRI’s extensive clinical use, systematic evaluation of AI-driven productivity gains in MRI remains limited. This review addresses that gap by synthesizing evidence on how AI can shorten scanning and reading times, optimize worklist triage, and automate segmentation. On 15 November 2024, we searched PubMed, EMBASE, MEDLINE, Web of Science, Google Scholar, and Cochrane Library for English-language studies published between 2000 and 15 November 2024, focusing on AI applications in MRI. Additional searches of grey literature were conducted. After screening for relevance and full-text review, 67 studies met inclusion criteria. Extracted data included study design, AI techniques, and productivity-related outcomes such as time savings and diagnostic accuracy. The included studies were categorized into five themes: reducing scan times, automating segmentation, optimizing workflow, decreasing reading times, and general time-saving or workload reduction. Convolutional neural networks (CNNs), especially architectures like ResNet and U-Net, were commonly used for tasks ranging from segmentation to automated reporting. A few studies also explored machine learning-based automation software and, more recently, large language models. Although most demonstrated gains in efficiency and accuracy, limited external validation and dataset heterogeneity could reduce broader adoption. AI applications in MRI offer potential to enhance radiologist productivity, mainly through accelerated scans, automated segmentation, and streamlined workflows. Further research, including prospective validation and standardized metrics, is needed to enable safe, efficient, and equitable deployment of AI tools in clinical MRI practice. Full article
(This article belongs to the Special Issue Deep Learning in Medical Image Segmentation and Diagnosis)
