Search Results (44,048)

Search Parameters:
Keywords = imaging performance

20 pages, 10851 KB  
Article
Evaluating Feature-Based Homography Pipelines for Dual-Camera Registration in Acupoint Annotation
by Thathsara Nanayakkara, Hadi Sedigh Malekroodi, Jaeuk Sul, Chang-Su Na, Myunggi Yi and Byeong-il Lee
J. Imaging 2025, 11(11), 388; https://doi.org/10.3390/jimaging11110388 (registering DOI) - 1 Nov 2025
Abstract
Reliable acupoint localization is essential for developing artificial intelligence (AI) and extended reality (XR) tools in traditional Korean medicine; however, conventional annotation of 2D images often suffers from inter- and intra-annotator variability. This study presents a low-cost dual-camera imaging system that fuses infrared (IR) and RGB views on a Raspberry Pi 5 platform, incorporating an IR ink pen in conjunction with a 780 nm emitter array to standardize point visibility. Among the tested marking materials, the IR ink showed the highest contrast and visibility under IR illumination, making it the most suitable for acupoint detection. Five feature detectors (SIFT, ORB, KAZE, AKAZE, and BRISK) were evaluated with two matchers (FLANN and BF) to construct representative homography pipelines. Comparative evaluations across multiple camera-to-surface distances revealed that KAZE + FLANN achieved the lowest mean 2D error (1.17 ± 0.70 px) and the lowest mean aspect-aware error (0.08 ± 0.05%) while remaining computationally feasible on the Raspberry Pi 5. In hand-image experiments across multiple postures, the dual-camera registration maintained a mean 2D error below ~3 px and a mean aspect-aware error below ~0.25%, confirming stable and reproducible performance. The proposed framework provides a practical foundation for generating high-quality acupoint datasets, supporting future AI-based localization, XR integration, and automated acupuncture-education systems. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
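
The KAZE + FLANN pairing the authors single out maps onto standard OpenCV primitives. The sketch below shows the usual detect–match–RANSAC flow for registering an IR view to an RGB view; the file paths, the example point, and all parameter values are hypothetical, not the paper's configuration.

```python
import cv2
import numpy as np

# Hypothetical inputs: grayscale IR and RGB frames from a dual-camera rig.
ir = cv2.imread("ir_view.png", cv2.IMREAD_GRAYSCALE)
rgb = cv2.imread("rgb_view.png", cv2.IMREAD_GRAYSCALE)

# KAZE produces float descriptors, so FLANN uses a KD-tree index (algorithm=1).
kaze = cv2.KAZE_create()
kp_ir, des_ir = kaze.detectAndCompute(ir, None)
kp_rgb, des_rgb = kaze.detectAndCompute(rgb, None)

flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des_ir, des_rgb, k=2)

# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

src = np.float32([kp_ir[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_rgb[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# RANSAC homography maps IR marker coordinates into the RGB frame.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
pt_rgb = cv2.perspectiveTransform(np.float32([[[120.0, 240.0]]]), H)
```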

17 pages, 5329 KB  
Case Report
Asymmetry Management During 3D-Guided Piezocorticotomy-Assisted MARPE Treatment with Direct Printed Aligners: Case Report
by Svitlana Koval, Viktoriia Kolesnyk and Daria Chepanova
J. Clin. Med. 2025, 14(21), 7773; https://doi.org/10.3390/jcm14217773 (registering DOI) - 1 Nov 2025
Abstract
Background: Midpalatal suture expansion is effective in both growing and adult patients, and Miniscrew-Assisted Rapid Palatal Expansion (MARPE) provides greater skeletal effects and fewer dentoalveolar side effects than traditional expanders. However, asymmetric expansion remains a challenge, often influenced by pre-existing craniofacial asymmetries, appliance design, and suture morphology. In this case report, we describe asymmetric expansion with 3D-guided piezocorticotomy-assisted MARPE and its management with directly printed aligners (DPAs). Methods: A patient with facial asymmetry, a narrow maxillary arch, and multiple dentoalveolar deformities underwent pre-treatment evaluation, including root inclination analysis and CBCT imaging. A MARPE appliance with 3D-guided piezocorticotomy assistance was applied; post-expansion orthodontic treatment was digitally planned and performed with directly printed aligners. Results: During MARPE activation, asymmetric midpalatal suture disarticulation was observed, with greater displacement on the left side due to jackscrew orientation and root proximity. Post-expansion orthodontic correction with DPAs allowed precise root positioning, spatial redistribution, and improved occlusal symmetry. Over 20 months, significant improvements were achieved in midline orientation, axial root inclination, and transverse arch coordination. Conclusions: The reported case underscores the importance of pre-treatment evaluation for asymmetries and careful appliance design in MARPE protocols; in addition, it demonstrates that directly printed aligners, supported by digital planning, can provide accurate and efficient dentoalveolar correction following asymmetric expansion. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)

19 pages, 5704 KB  
Article
Rapid and Non-Destructive Assessment of Eight Essential Amino Acids in Foxtail Millet: Development of an Efficient and Accurate Detection Model Based on Near-Infrared Hyperspectral
by Anqi Gao, Xiaofu Wang, Erhu Guo, Dongxu Zhang, Kai Cheng, Xiaoguang Yan, Guoliang Wang and Aiying Zhang
Foods 2025, 14(21), 3760; https://doi.org/10.3390/foods14213760 (registering DOI) - 1 Nov 2025
Abstract
Foxtail millet is a vital grain whose amino acid content affects nutritional quality. Traditional detection methods are destructive, time-consuming, and inefficient. To address these limitations, this study developed a rapid, non-destructive approach for quantifying eight essential amino acids—lysine, phenylalanine, methionine, threonine, isoleucine, leucine, valine, and histidine—in foxtail millet (variety: Changnong No. 47) using near-infrared hyperspectral imaging. A total of 217 samples were collected and used for model development. The spectral data were preprocessed using Savitzky–Golay smoothing, adaptive iteratively reweighted penalized least squares, and standard normal variate transformation. Key wavelengths were extracted using the competitive adaptive reweighted sampling (CARS) algorithm, and four regression models—Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), Convolutional Neural Network (CNN), and Bidirectional Long Short-Term Memory (BiLSTM)—were constructed. The results showed that the key wavelengths selected by CARS accounted for only 2.03–4.73% of the full spectrum. BiLSTM was most suitable for modeling lysine (R2 = 0.5862, RMSE = 0.0081, RPD = 1.6417). CNN demonstrated the best performance for phenylalanine, methionine, isoleucine, and leucine. SVR was most effective for predicting threonine (R2 = 0.8037, RMSE = 0.0090, RPD = 2.2570), valine, and histidine. This study offers an effective novel approach for intelligent quality assessment of grains. Full article
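
As a concrete instance of this kind of pipeline, a minimal sketch of SNV plus Savitzky–Golay preprocessing feeding a PLSR model (one of the four model families compared) could look as follows; the random arrays are stand-ins for the 217 measured spectra and reference amino acid values, and the wavelength and component counts are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    # Standard normal variate: per-spectrum centering and scaling.
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Synthetic stand-ins: 217 samples x 300 wavelengths, one amino acid target.
rng = np.random.default_rng(0)
X = rng.random((217, 300))
y = rng.random(217)

X = savgol_filter(snv(X), window_length=11, polyorder=2, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmse = float(np.sqrt(np.mean((y_te - y_hat) ** 2)))
rpd = float(y_te.std() / rmse)  # residual predictive deviation
print(f"R2={pls.score(X_te, y_te):.3f}  RMSE={rmse:.3f}  RPD={rpd:.2f}")
```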

26 pages, 15315 KB  
Article
Machine and Deep Learning Framework for Sargassum Detection and Fractional Cover Estimation Using Multi-Sensor Satellite Imagery
by José Manuel Echevarría-Rubio, Guillermo Martínez-Flores and Rubén Antelmo Morales-Pérez
Data 2025, 10(11), 177; https://doi.org/10.3390/data10110177 (registering DOI) - 1 Nov 2025
Abstract
Over the past decade, recurring influxes of pelagic Sargassum have posed significant environmental and economic challenges in the Caribbean Sea. Effective monitoring is crucial for understanding bloom dynamics and mitigating their impacts. This study presents a comprehensive machine learning (ML) and deep learning (DL) framework for detecting Sargassum and estimating its fractional cover using imagery from key satellite sensors: the Operational Land Imager (OLI) on Landsat-8 and the Multispectral Instrument (MSI) on Sentinel-2. A spectral library was constructed from five core spectral bands (Blue, Green, Red, Near-Infrared, and Short-Wave Infrared). It was used to train an ensemble of five diverse classifiers: Random Forest (RF), K-Nearest Neighbors (KNN), XGBoost (XGB), a Multi-Layer Perceptron (MLP), and a 1D Convolutional Neural Network (1D-CNN). All models achieved high classification performance on a held-out test set, with weighted F1-scores exceeding 0.976. The probabilistic outputs from these classifiers were then leveraged as a direct proxy for the sub-pixel fractional cover of Sargassum. Critically, an inter-algorithm agreement analysis revealed that detections on real-world imagery are typically either of very high (unanimous) or very low (contentious) confidence, highlighting the diagnostic power of the ensemble approach. The resulting framework provides a robust and quantitative pathway for generating confidence-aware estimates of Sargassum distribution. This work supports efforts to manage these harmful algal blooms by providing vital information on detection certainty, while underscoring the critical need to empirically validate fractional cover proxies against in situ or UAV measurements. Full article
(This article belongs to the Section Spatial Data Science and Digital Earth)
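
The paper's use of classifier probabilities as a fractional-cover proxy, and its inter-algorithm agreement analysis, can be sketched with scikit-learn; here two of the five classifier types stand in for the full ensemble, and all arrays are placeholders for the five-band spectral library and scene pixels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder spectral library: N pixels x 5 bands (B, G, R, NIR, SWIR);
# labels: 1 = Sargassum, 0 = water/other (a toy NIR-red rule stands in for truth).
rng = np.random.default_rng(1)
X = rng.random((5000, 5))
y = (X[:, 3] - X[:, 2] > 0.2).astype(int)

models = [
    RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y),
    KNeighborsClassifier(n_neighbors=5).fit(X, y),
]

scene = rng.random((10000, 5))  # placeholder scene pixels
# P(Sargassum) per model, used as a sub-pixel fractional-cover proxy.
probs = np.stack([m.predict_proba(scene)[:, 1] for m in models])
fractional_cover = probs.mean(axis=0)
# Inter-model agreement: unanimous pixels are high-confidence detections,
# split votes mark contentious ones.
agreement = (probs > 0.5).mean(axis=0)
contentious = (agreement > 0) & (agreement < 1)
```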

29 pages, 11403 KB  
Article
In-Vivo Characterization of Healthy Retinal Pigment Epithelium and Photoreceptor Cells from AO-(T)FI Imaging
by Sohrab Ferdowsi, Leila Sara Eppenberger, Safa Mohanna, Oliver Pfäffli, Christoph Amstutz, Lucas M. Bachmann, Michael A. Thiel and Martin K. Schmid
Vision 2025, 9(4), 91; https://doi.org/10.3390/vision9040091 (registering DOI) - 1 Nov 2025
Abstract
We provide an automated characterization of human retinal cells—retinal pigment epithelium (RPE) cells from non-invasive AO-TFI retinal imaging and photoreceptor (PR) cells from non-invasive AO-FI retinal imaging—in a large-scale study involving 171 confirmed healthy eyes from 104 participants aged 23 to 80 years. Comprehensive standard checkups based on SD-OCT and fundus imaging modalities were carried out by ophthalmologists from the Luzerner Kantonsspital (LUKS) to confirm the absence of retinal pathologies. AO imaging was performed using the Cellularis® device, and each eye was imaged at various retinal eccentricities. The images were automatically segmented using dedicated software; RPE and PR cells were identified, and morphometric characterizations, such as cell density and area, were computed. The results were stratified by various criteria, such as age, retinal eccentricity, and visual acuity. The automatic segmentation was validated independently on a held-out set by five trained medical students not involved in this study. We plotted cell density variations as a function of eccentricity from the fovea along both nasal and temporal directions. For RPE cells, no consistent trend in density was observed between 0° and 9° eccentricity, contrasting with established histological literature demonstrating foveal density peaks. In contrast, PR cell density showed a clear decrease from 2.5° to 9°. RPE cell density declined linearly with age, whereas no age-related pattern was detected for PR cell density. On average, RPE cell density was ≈6313 cells/mm² (σ = 757), while the average PR cell density was ≈10,207 cells/mm² (σ = 1273). Full article
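
The reported densities reduce to counting segmented cells per unit area once the image's physical scale is known; a minimal sketch, with an assumed pixel pitch and a toy mask in place of the segmentation software's output, is:

```python
import numpy as np
from scipy import ndimage

# Illustrative values: a binary cell mask and an assumed lateral sampling in mm.
mask = np.zeros((512, 512), dtype=bool)
mask[100:104, 100:104] = True           # one toy "cell"
mm_per_px = 0.0011                      # hypothetical pixel pitch

labels, n_cells = ndimage.label(mask)   # connected components = cells
field_area_mm2 = mask.size * mm_per_px ** 2
density = n_cells / field_area_mm2      # cells per mm^2
mean_cell_area = mask.sum() * mm_per_px ** 2 / max(n_cells, 1)
```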

18 pages, 3793 KB  
Article
Water Body Identification from Satellite Images Using a Hybrid Evolutionary Algorithm-Optimized U-Net Framework
by Yue Yuan, Peiyang Wei, Zhixiang Qi, Xun Deng, Ji Zhang, Jianhong Gan, Tinghui Chen and Zhibin Li
Biomimetics 2025, 10(11), 732; https://doi.org/10.3390/biomimetics10110732 (registering DOI) - 1 Nov 2025
Abstract
Accurate and automated identification of water bodies from satellite imagery is critical for environmental monitoring, water resource management, and disaster response. Current deep learning approaches, however, suffer from a strong dependence on manual hyperparameter tuning, which limits their automation capability and robustness in complex, multi-scale scenarios. To overcome this limitation, this study proposes a fully automated segmentation framework that synergistically integrates an enhanced U-Net model with a novel hybrid evolutionary optimization strategy. Extensive experiments on public Kaggle and Sentinel-2 datasets demonstrate the superior performance of our method, which achieves a Pixel Accuracy of 96.79% and an F1-Score of 94.75, outperforming various mainstream baseline models by over 10% in key metrics. The framework effectively addresses the class imbalance problem and enhances feature representation without human intervention. This work provides a viable and efficient path toward fully automated remote sensing image analysis, with significant potential for application in large-scale water resource monitoring, dynamic environmental assessment, and emergency disaster management. Full article
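
The hybrid evolutionary strategy is not detailed in the abstract, but the generic pattern it belongs to—encode hyperparameters as genomes, score each by validation performance, then select and mutate—can be sketched as follows; the search space and the fitness function are stand-ins for actually training the U-Net.

```python
import random

# Hypothetical hyperparameter ranges (learning rate, batch size, base filters).
SPACE = {"lr": (1e-5, 1e-2), "batch": (2, 32), "filters": (16, 128)}

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in SPACE.items()}

def fitness(g):
    # Stand-in for "train the U-Net with these hyperparameters, return val score".
    return -((g["lr"] - 1e-3) ** 2) - ((g["filters"] - 64) ** 2) * 1e-6

def mutate(g, rate=0.3):
    child = dict(g)
    for k, (lo, hi) in SPACE.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

pop = [random_genome() for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:5]                                   # truncation selection
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]
best = max(pop, key=fitness)
```
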
22 pages, 1809 KB  
Article
Semantic-Aware Co-Parallel Network for Cross-Scene Hyperspectral Image Classification
by Xiaohui Li, Chenyang Jin, Yuntao Tang, Kai Xing and Xiaodong Yu
Sensors 2025, 25(21), 6688; https://doi.org/10.3390/s25216688 (registering DOI) - 1 Nov 2025
Abstract
Cross-scene classification of hyperspectral images poses significant challenges due to the lack of a priori knowledge and the differences in data distribution across scenes. While traditional studies have made limited use of a priori knowledge from other modalities, recent advancements in pre-trained large-scale language-vision models have shown strong performance on various downstream tasks, highlighting the potential of cross-modal assisted learning. In this paper, we propose a Semantic-aware Collaborative Parallel Network (SCPNet) to mitigate the impact of data distribution differences by incorporating linguistic modalities to assist in learning cross-domain invariant representations of hyperspectral images. SCPNet uses a parallel architecture consisting of a spatial–spectral feature extraction module and a multiscale feature extraction module, designed to capture rich image information during the feature extraction phase. The extracted features are then mapped into an optimized semantic space, where improved supervised contrastive learning clusters image features from the same category together while separating those from different categories. The semantic space bridges the gap between visual and linguistic modalities, enabling the model to mine cross-domain invariant representations from the linguistic modality. Experimental results demonstrate that SCPNet significantly outperforms existing methods on three publicly available datasets, confirming its effectiveness for cross-scene hyperspectral image classification tasks. Full article
(This article belongs to the Special Issue Remote Sensing Image Processing, Analysis and Application)
16 pages, 3443 KB  
Article
Automated Detection and Grading of Renal Cell Carcinoma in Histopathological Images via Efficient Attention Transformer Network
by Hissa Al-kuwari, Belqes Alshami, Aisha Al-Khinji, Adnan Haider and Muhammad Arsalan
Med. Sci. 2025, 13(4), 257; https://doi.org/10.3390/medsci13040257 (registering DOI) - 1 Nov 2025
Abstract
Background: Renal Cell Carcinoma (RCC) is the most common type of kidney cancer and requires accurate histopathological grading for effective prognosis and treatment planning. However, manual grading is time-consuming, subjective, and susceptible to inter-observer variability. Objective: This study proposes EAT-Net (Efficient Attention Transformer Network), a dual-stream deep learning model designed to automate and enhance RCC grade classification from histopathological images. Method: EAT-Net integrates EfficientNetB0 for local feature extraction and a Vision Transformer (ViT) stream for capturing global contextual dependencies. The architecture incorporates Squeeze-and-Excitation (SE) modules to recalibrate feature maps, improving focus on informative regions. The model was trained and evaluated on two publicly available datasets, KMC-RENAL and RCCG-Net. Standard preprocessing was applied, and the model’s performance was assessed using accuracy, precision, recall, and F1-score. Results: EAT-Net achieved superior results compared to state-of-the-art models, with an accuracy of 92.25%, precision of 92.15%, recall of 92.12%, and F1-score of 92.25%. Ablation studies demonstrated the complementary value of the EfficientNet and ViT streams. Additionally, Grad-CAM visualizations confirmed that the model focuses on diagnostically relevant areas, supporting its interpretability and clinical relevance. Conclusion: EAT-Net offers an accurate and explainable framework for RCC grading. Its lightweight architecture and high performance make it well-suited for clinical deployment in digital pathology workflows. Full article
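
The Squeeze-and-Excitation recalibration mentioned above is a small, standard module; a PyTorch sketch (channel and reduction sizes are illustrative, not the paper's) is:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # recalibrated feature map

out = SEBlock(64)(torch.randn(2, 64, 32, 32))        # same shape in, same shape out
```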

31 pages, 15872 KB  
Article
Gated Attention-Augmented Double U-Net for White Blood Cell Segmentation
by Ilyes Benaissa, Athmane Zitouni, Salim Sbaa, Nizamettin Aydin, Ahmed Chaouki Megherbi, Abdellah Zakaria Sellam, Abdelmalik Taleb-Ahmed and Cosimo Distante
J. Imaging 2025, 11(11), 386; https://doi.org/10.3390/jimaging11110386 (registering DOI) - 1 Nov 2025
Abstract
Segmentation of white blood cells is critical for a wide range of applications. It aims to identify and isolate individual white blood cells from medical images, enabling accurate diagnosis and monitoring of diseases. In the last decade, many researchers have focused on this task using U-Net, one of the most widely used deep learning architectures. To further enhance segmentation accuracy and robustness, recent advances have explored the combination of U-Net with other techniques, such as attention mechanisms and aggregation techniques. However, a common challenge in white blood cell image segmentation is the similarity between the cells’ cytoplasm and other surrounding blood components, which often leads to inaccurate or incomplete segmentation due to difficulties in distinguishing low-contrast or subtle boundaries, leaving a significant gap for improvement. In this paper, we propose GAAD-U-Net, a novel architecture that integrates attention-augmented convolutions to better capture ambiguous boundaries and complex structures such as overlapping cells and low-contrast regions, followed by a gating mechanism to further suppress irrelevant feature information. These two key components are integrated into the Double U-Net base architecture. Our model achieves state-of-the-art performance on white blood cell benchmark datasets, with a 3.4% Dice similarity coefficient (DSC) improvement specifically on the SegPC-2021 dataset. The proposed model achieves superior performance as measured by the mean intersection over union (IoU) and DSC, with notably strong segmentation performance even for difficult images. Full article
(This article belongs to the Special Issue Computer Vision for Medical Image Analysis)
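
The two reported metrics are overlap ratios between predicted and reference masks; for binary masks they can be computed as in this short sketch:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # Dice similarity coefficient: 2|A∩B| / (|A| + |B|).
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    # Intersection over union (Jaccard index): |A∩B| / |A∪B|.
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))
```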

25 pages, 2631 KB  
Article
Lightweight and Real-Time Driver Fatigue Detection Based on MG-YOLOv8 with Facial Multi-Feature Fusion
by Chengming Chen, Xinyue Liu, Meng Zhou, Zhijian Li, Zhanqi Du and Yandan Lin
J. Imaging 2025, 11(11), 385; https://doi.org/10.3390/jimaging11110385 (registering DOI) - 1 Nov 2025
Abstract
Driver fatigue is a primary factor in traffic accidents and poses a serious threat to road safety. To address this issue, this paper proposes a multi-feature fusion fatigue detection method based on an improved YOLOv8 model. First, the method uses an enhanced YOLOv8 model to achieve high-precision face detection. Then, it crops the detected face regions. Next, the lightweight PFLD (Practical Facial Landmark Detector) model performs keypoint detection on the cropped images, extracting 68 facial feature points and calculating key indicators related to fatigue status. These indicators include the eye aspect ratio (EAR), eyelid closure percentage (PERCLOS), mouth aspect ratio (MAR), and head posture ratio (HPR). To mitigate the impact of individual differences on detection accuracy, the paper introduces a novel sliding window model that combines a dynamic threshold adjustment strategy with an exponential weighted moving average (EWMA) algorithm. Based on this framework, blink frequency (BF), yawn frequency (YF), and nod frequency (NF) are calculated to extract time-series behavioral features related to fatigue. Finally, the driver’s fatigue state is determined using a comprehensive fatigue assessment algorithm. Experimental results on the WIDER FACE and YAWDD datasets demonstrate this method’s significant advantages in improving detection accuracy and computational efficiency. By striking a better balance between real-time performance and accuracy, the proposed method shows promise for real-world driving applications. Full article
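
The EAR and EWMA components are compact formulas; the sketch below assumes the common 68-point landmark ordering (indices 36–41 for one eye) and is illustrative rather than the paper's exact implementation.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    # eye: six (x, y) landmarks; two vertical gaps over the horizontal span.
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return float((a + b) / (2.0 * c))

def ewma(values: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    # Exponential weighted moving average over a window of per-frame EARs.
    out = np.empty(len(values), dtype=float)
    out[0] = values[0]
    for i in range(1, len(values)):
        out[i] = alpha * values[i] + (1 - alpha) * out[i - 1]
    return out

landmarks = np.random.rand(68, 2)          # placeholder for PFLD output
ear = eye_aspect_ratio(landmarks[36:42])   # one eye in the 68-point scheme
```
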
26 pages, 2078 KB  
Article
Integrating Dual Graph Constraints into Sparse Non-Negative Tucker Decomposition for Enhanced Co-Clustering
by Jing Han and Linzhang Lu
Mathematics 2025, 13(21), 3494; https://doi.org/10.3390/math13213494 (registering DOI) - 1 Nov 2025
Abstract
Collaborative clustering is an ensemble technique that enhances clustering performance by simultaneously and synergistically processing multiple data dimensions or tasks. This is an active research area in artificial intelligence, machine learning, and data mining. A common approach to co-clustering is based on non-negative matrix factorization (NMF). While widely used, NMF-based co-clustering is limited by its bilinear nature and fails to capture the multilinear structure of data. With the objective of enhancing the effectiveness of non-negative Tucker decomposition (NTD) in image clustering tasks, in this paper we propose a dual-graph-constrained sparse non-negative Tucker decomposition (GDSNTD) model for co-clustering. It integrates graph regularization, the Frobenius norm, and an l1 norm constraint to simultaneously optimize the objective function. The GDSNTD model, featuring graph regularization on both factor matrices, more effectively discovers meaningful latent structures in high-order data. The addition of the l1 regularization constraint on the factor matrices may help identify the most critical original features, and the use of the Frobenius norm may produce a more stable and accurate solution to the optimization problem. The convergence of the proposed method is then proven, and the detailed derivation is provided. Finally, experimental results on public datasets demonstrate that the proposed model outperforms state-of-the-art methods in image clustering, achieving superior scores in accuracy and Normalized Mutual Information. Full article
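
Reading the abstract's ingredients together—graph regularization on both factor matrices, a Frobenius-norm data-fitting term, and l1 sparsity—the objective plausibly takes a form like the following reconstruction (an assumption, not the paper's exact notation), with non-negativity constraints throughout:

```latex
\min_{\mathcal{G},\,A,\,B,\,C \,\ge\, 0}\;
  \bigl\| \mathcal{X} - \mathcal{G} \times_1 A \times_2 B \times_3 C \bigr\|_F^2
  + \lambda_A \,\mathrm{Tr}\!\left(A^{\top} L_A A\right)
  + \lambda_B \,\mathrm{Tr}\!\left(B^{\top} L_B B\right)
  + \mu \left( \|A\|_1 + \|B\|_1 \right)
```

where $\mathcal{X}$ is the data tensor, $\mathcal{G}$ the core tensor, and $L_A$, $L_B$ the graph Laplacians of the two factor-matrix graphs.
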
20 pages, 3769 KB  
Article
Diagnostic Accuracy of a Multi-Target Artificial Intelligence Service for the Simultaneous Assessment of 16 Pathological Features on Chest and Abdominal CT
by Valentin A. Nechaev, Nataliya Y. Kashtanova, Evgenii V. Kopeykin, Umamat M. Magomedova, Maria S. Gribkova, Anton V. Hardin, Marina I. Sekacheva, Varvara D. Sanikovich, Valeria Y. Chernina and Victor A. Gombolevskiy
Diagnostics 2025, 15(21), 2778; https://doi.org/10.3390/diagnostics15212778 (registering DOI) - 1 Nov 2025
Abstract
Background/Objectives: Chest, abdominal, and pelvic computed tomography (CT) with intravenous contrast is widely used for tumor staging, treatment planning, and therapy monitoring. The integration of artificial intelligence (AI) services is expected to improve diagnostic accuracy across multiple anatomical regions simultaneously. We aimed to evaluate the diagnostic accuracy of a multi-target AI service in detecting 16 pathological features on chest and abdominal CT images. Methods: We conducted a retrospective study using anonymized CT data from an open dataset. A total of 229 CT scans were independently interpreted by four radiologists with more than 5 years of experience and analyzed by the AI service. Sixteen pathological features were assessed. AI errors were classified as minor, intermediate, or clinically significant. Diagnostic accuracy was evaluated using the area under the receiver operating characteristic curve (AUC). Results: Across 229 CT scans, the AI service made 423 errors (11.5% of all evaluated features, n = 3664). False positives accounted for 262 cases (61.9%) and false negatives for 161 (38.1%). Most errors were minor (62.9%) or intermediate (31.7%), while clinically significant errors comprised only 5.4%. The overall AUC of the AI service was 0.88 (95% CI: 0.87–0.89), compared with 0.78–0.81 for radiologists. For clinically significant findings, the AI AUC was 0.90 (95% CI: 0.71–1.00). Diagnostic accuracy was unsatisfactory only for urolithiasis. Conclusions: The multi-target AI service demonstrated high diagnostic accuracy for chest and abdominal CT interpretation, with most errors being clinically negligible; performance was limited for urolithiasis. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
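
The headline AUC figures correspond to a standard ROC computation over per-feature binary labels; with scikit-learn, and toy arrays standing in for the reference reads and AI outputs, a point estimate and a bootstrap interval in the style reported above can be obtained as:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Toy stand-ins: one reference label and one AI probability per evaluated
# feature (the study assessed n = 3664 features in total).
truth = rng.integers(0, 2, size=3664)
ai_scores = np.clip(truth * 0.7 + rng.random(3664) * 0.5, 0, 1)

auc = roc_auc_score(truth, ai_scores)
# Bootstrap a 95% CI by resampling cases with replacement.
boot = [
    roc_auc_score(truth[idx], ai_scores[idx])
    for idx in (rng.integers(0, len(truth), len(truth)) for _ in range(1000))
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```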

15 pages, 3705 KB  
Article
Practical Considerations in Abdominal MRI: Sequences, Patient Preparation, and Clinical Applications
by Nicoleta Cazacu, Claudia G. Chilom, Cosmin Adrian and Costin A. Minoiu
Methods Protoc. 2025, 8(6), 129; https://doi.org/10.3390/mps8060129 (registering DOI) - 1 Nov 2025
Abstract
This study discusses the challenges encountered in implementing a detailed protocol for upper abdominal imaging using magnetic resonance imaging (MRI), ranging from patient preparation and sequence selection to clinical applications. MRI is a valuable non-invasive imaging modality employed both in the early detection of diseases and as a complementary tool for the detailed characterization of various pathologies. Nevertheless, performing an abdominal MRI examination can be challenging; an understanding of the sequences is therefore particularly important, as changing the parameters can not only influence image quality but also optimize scanning time and improve patient experience during the examination. The methodology illustrates the purpose of each sequence and the critical role of appropriate patient preparation. The results highlight the significance of these factors in the evaluation of hepatic lesions, showing that the proper choice of sequences and parameters is essential for distinguishing benign from malignant findings and for achieving an accurate diagnosis. It was also shown that MRI plays an important role as a complementary technique in the investigation of upper abdominal pathologies, helping to avoid overexposure to radiation. Full article
(This article belongs to the Section Biomedical Sciences and Physiology)

15 pages, 2362 KB  
Article
Quantifying Morphological Change in Stage III Lipedema: A 3D Imaging Study of Population Trends and Individual Treatment Courses
by Niels A. Sanktjohanser, Nikolaus Thierfelder, Benjamin Beck, Sinan Mert, Benedikt Fuchs, Paul S. Wiggenhauser, Riccardo E. Giunta and Konstantin C. Koban
J. Pers. Med. 2025, 15(11), 525; https://doi.org/10.3390/jpm15110525 (registering DOI) - 1 Nov 2025
Abstract
Background/Objectives: Lipedema is a chronic disorder characterized by disproportionate fat accumulation in the extremities, causing pain, bruising, and reduced mobility. When conservative therapy fails, liposuction is considered an effective treatment option. Prior studies often relied on subjective or non-standardized measures, limiting precision. This study aimed to objectively assess volumetric changes after liposuction in stage III lipedema using high-resolution 3D imaging to quantify postoperative changes in circumference and volume, providing individualized yet standardized outcome measures aligned with precision medicine. Methods: We retrospectively analyzed 66 patients who underwent 161 water-assisted liposuctions (WALs). Pre- and postoperative measurements were performed with the VECTRA© WB360 system, allowing reproducible, anatomically specific quantification of limb volumes and circumferences. Secondary endpoints included in-hospital complications. Results: Liposuction achieved significant reductions in all treated regions, most pronounced in the proximal thigh and upper arm. Thigh volume decreased by 4.10–9.25% (q < 0.001), while upper arm volume decreased by 15.63% (left) and 20.15% (right) (q = 0.001). Circumference decreased by up to 5.2% in the thigh (q < 0.001) and 12.27% (q = 0.001) in the upper arm. All changes were calculated relative to baseline values, allowing personalized interpretation of treatment effects. Conclusions: This is the first study to objectively quantify postoperative lipedema changes using whole-body 3D surface imaging. By capturing each patient’s contours pre- and postoperatively, this approach enables individualized evaluation while permitting standardized comparison across patients. It offers a precise understanding of surgical outcomes and supports integration of precision medicine principles in lipedema surgery. Full article
(This article belongs to the Section Personalized Therapy in Clinical Medicine)

37 pages, 3827 KB  
Review
A Survey of Data Augmentation Techniques for Traffic Visual Elements
by Mengmeng Yang, Lay Sheng Ewe, Weng Kean Yew, Sanxing Deng and Sieh Kiong Tiong
Sensors 2025, 25(21), 6672; https://doi.org/10.3390/s25216672 (registering DOI) - 1 Nov 2025
Abstract
Autonomous driving is a cornerstone of intelligent transportation systems, where visual elements such as traffic signs, lights, and pedestrians are critical for safety and decision-making. Yet, existing datasets often lack diversity, underrepresent rare scenarios, and suffer from class imbalance, which limits the robustness of object detection models. While earlier reviews have examined general image enhancement, a systematic analysis of dataset augmentation for traffic visual elements remains lacking. This paper presents a comprehensive investigation of enhancement techniques tailored for transportation datasets. It pursues three objectives: establishing a classification framework for autonomous driving scenarios, assessing performance gains from augmentation methods on tasks such as detection and classification, and providing practical insights to guide dataset improvement in both research and industry. Four principal approaches are analyzed, including image transformation, GAN-based generation, diffusion models, and composite methods, with discussion of their strengths, limitations, and emerging strategies. Nearly 40 traffic-related datasets and 10 evaluation metrics are reviewed to support benchmarking. Results show that augmentation improves robustness under challenging conditions, with hybrid methods often yielding the best outcomes. Nonetheless, key challenges remain, including computational costs, unstable GAN training, and limited rare scene data. Future work should prioritize lightweight models, richer semantic context, specialized datasets, and scalable, efficient strategies. Full article
(This article belongs to the Section Intelligent Sensors)
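
Of the four approach families, plain image transformations are the simplest to make concrete; the torchvision sketch below (all parameter values are illustrative) also reflects a traffic-specific caveat: horizontal flips are deliberately omitted, because mirroring can change the semantics of signs and arrows.

```python
from torchvision import transforms

# Photometric and mild geometric perturbations for traffic imagery.
# No RandomHorizontalFlip: mirrored signs and arrows change meaning.
traffic_aug = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3, hue=0.05),
    transforms.RandomRotation(degrees=5),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
    transforms.GaussianBlur(kernel_size=3, sigma=(0.1, 1.5)),
    transforms.ToTensor(),
])
# Usage: augmented = traffic_aug(pil_image) for each training sample.
```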
