Search Results (2,628)

Search Parameters:
Keywords = vision assessment

17 pages, 329 KB  
Article
Machine Learning-Based Prediction of Muscle Injury Risk in Professional Football: A Four-Year Longitudinal Study
by Francisco Martins, Hugo Sarmento, Élvio Rúbio Gouveia, Paulo Saveca and Krzysztof Przednowek
J. Clin. Med. 2025, 14(22), 8039; https://doi.org/10.3390/jcm14228039 - 13 Nov 2025
Abstract
Background: Professional football requires more attention in planning work regimens that balance players’ sports performance optimization and reduce their injury probability. Machine learning applied to sports science has focused on predicting these events and identifying their risk factors. Our study aims to (i) analyze the differences between injury incidence during training and matches and (ii) build and classify different predictive models of risk based on players’ internal and external loads across four sports seasons. Methods: This investigation involved 96 male football players (26.2 ± 4.2 years; 181.1 ± 6.1 cm; 74.5 ± 7.1 kg) representing a single professional football club across four analyzed seasons. The research was designed according to three methodological sets of assessments: (i) average season performance, (ii) two weeks’ performance before the event, and (iii) four weeks’ performance before the event. We applied machine learning classification methods to build and classify different predictive injury risk models for each dataset. The dependent variable is categorical, representing the occurrence of a time-loss muscle injury (N = 97). The independent variables include players’ information and external (GPS-derived) and internal (RPE) workload variables. Results: The Kstar classifier with the four-week window dataset achieved the best predictive performance, presenting an Area Under the Precision–Recall Curve (AUC-PR) of 83% and a balanced accuracy of 72%. Conclusions: In practical terms, this methodology provides technical staff with more reliable data to inform modifications to playing and training regimens. Future research should focus on understanding the technical staff’s qualitative vision of predictive models’ in-field applicability. Full article
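For context on the headline numbers above: the reported "balanced accuracy of 72%" is the mean of per-class recall, a metric suited to the class imbalance typical of injury data (97 injury events among many more non-injury observations). A minimal sketch in Python — the function name and toy labels are illustrative, not taken from the study:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; 0.5 is chance level for binary labels."""
    recalls = []
    for c in set(y_true):
        # recall for class c: correctly predicted c / all true c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        n = sum(1 for t in y_true if t == c)
        recalls.append(tp / n)
    return sum(recalls) / len(recalls)

# toy example: 2 injuries (1), 2 non-injuries (0); one injury is missed
print(balanced_accuracy([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.75
```

Unlike plain accuracy, this score cannot be inflated by always predicting the majority (non-injury) class, which would yield 0.5 here.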
17 pages, 1413 KB  
Article
Sustainable Urban Futures: Transportation and Development in Riyadh, Jeddah, and Neom
by Khalid Mohammed Almatar
Sustainability 2025, 17(22), 10133; https://doi.org/10.3390/su172210133 - 12 Nov 2025
Abstract
This study explores sustainable urbanism in the three largest Saudi Arabian cities—Riyadh, Jeddah, and NEOM—in the context of Vision 2030. Qualitative methodology was used, which incorporated environmental, social, economic, governance, and mobility aspects. The analysis of ten semi-structured interviews with planners, engineers, and policy officials was based on Strategic Environmental Assessment (SEA), Sustainable Urbanism, and Participatory Governance models. The results indicate that Riyadh presents structural congruence and consistency of policies, Jeddah is characterized by disjointed governance and poor coordination, and NEOM is characterized by futuristic aspirations with unpredictable social inclusiveness. The paper highlights that more powerful integration of governance, participatory planning, and realistic implementation is required to create a balance between technological innovations and equity in society. It adds to the current knowledge of how the global sustainability models can be localized in the fast-changing cities of the Gulf. Full article

18 pages, 364 KB  
Article
Explainable Deep Learning for Endometriosis Classification in Laparoscopic Images
by Yixuan Zhu and Mahmoud Elbattah
BioMedInformatics 2025, 5(4), 63; https://doi.org/10.3390/biomedinformatics5040063 - 12 Nov 2025
Abstract
Background/Objectives: Endometriosis is a chronic inflammatory condition that often requires laparoscopic examination for definitive diagnosis. Automated analysis of laparoscopic images using Deep Learning (DL) may support clinicians by improving diagnostic consistency and efficiency. This study aimed to develop and evaluate explainable DL models for the binary classification of endometriosis using laparoscopic images from the publicly available GLENDA (Gynecologic Laparoscopic ENdometriosis DAtaset). Methods: Four representative architectures—ResNet50, EfficientNet-B2, EdgeNeXt_Small, and Vision Transformer (ViT-Small/16)—were systematically compared under class-imbalanced conditions using five-fold cross-validation. To enhance interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM) and SHapley Additive exPlanations (SHAP) were applied for visual explanation, and their quantitative alignment with expert-annotated lesion masks was assessed using Intersection over Union (IoU), Dice coefficient, and Recall. Results: Among the evaluated models, EdgeNeXt_Small achieved the best trade-off between classification performance and computational efficiency. Grad-CAM produced spatially coherent visualizations that corresponded well with clinically relevant lesion regions. Conclusions: The study shows that lightweight convolutional neural network (CNN)–Transformer architectures, combined with quantitative explainability assessment, can identify endometriosis in laparoscopic images with reasonable accuracy and interpretability. These findings indicate that explainable AI methods may help improve diagnostic consistency by offering transparent visual cues that align with clinically relevant regions. Further validation in broader clinical settings is warranted to confirm their practical utility. Full article
(This article belongs to the Section Imaging Informatics)
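The IoU and Dice coefficients used above to score the alignment of Grad-CAM/SHAP explanations with expert lesion masks are both overlap measures between two binary masks. A minimal sketch over flattened 0/1 masks — toy data, not the GLENDA dataset:

```python
def iou_dice(mask_a, mask_b):
    """Overlap metrics for two flattened binary masks (lists of 0/1)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    iou = inter / union if union else 1.0                       # two empty masks agree
    dice = 2 * inter / (size_a + size_b) if (size_a + size_b) else 1.0
    return iou, dice

iou, dice = iou_dice([1, 1, 0, 0], [1, 0, 1, 0])
print(iou, dice)  # → 0.3333333333333333 0.5
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank explanation quality the same way; Dice simply weights the intersection more heavily.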

25 pages, 7244 KB  
Article
Computer Vision for Cover Crop Seed-Mix Detection and Quantification
by Karishma Kumari, Kwanghee Won and Ali M. Nafchi
Seeds 2025, 4(4), 59; https://doi.org/10.3390/seeds4040059 - 12 Nov 2025
Abstract
Cover crop mixes play an important role in enhancing soil health, nutrient turnover, and ecosystem resilience; yet, maintaining even seed dispersion and planting uniformity is difficult due to significant variances in seed physical and aerodynamic properties. These discrepancies produce non-uniform seeding and species separation in drill hoppers, which has an impact on stand establishment and biomass stability. The thousand-grain weight is an important measure for determining cover crop seed quality and yield since it represents the weight of 1000 seeds in grams. Accurate seed counting is thus a key factor in calculating thousand-grain weight. Accurate mixed-seed identification is also helpful in breeding, phenotypic assessment, and the detection of moldy or damaged grains. However, in real-world conditions, the overlap and thickness of adhesion of mixed seeds make precise counting difficult, necessitating current research into powerful seed detection. This study addresses these issues by integrating deep learning-based computer vision algorithms for multi-seed detection and counting in cover crop mixes. The Canon LP-E6N R6 5D Mark IV camera was used to capture high-resolution photos of flax, hairy vetch, red clover, radish, and rye seeds. The dataset was annotated, augmented, and preprocessed on RoboFlow, split into train, validation, and test splits. Two top models, YOLOv5 and YOLOv7, were tested for multi-seed detection accuracy. The results showed that YOLOv7 outperformed YOLOv5 with 98.5% accuracy, 98.7% recall, and a mean Average Precision (mAP 0–95) of 76.0%. The results show that deep learning-based models can accurately recognize and count mixed seeds using automated methods, which has practical applications in seed drill calibration, thousand-grain weight estimation, and fair cover crop establishment. Full article
(This article belongs to the Special Issue Agrotechnics in Seed Quality: Current Progress and Challenges)
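The thousand-grain weight discussed above follows directly from an accurate seed count: weigh a sample, count its seeds (here, via the detector), and extrapolate to 1000 seeds. A trivial sketch with illustrative values (not from the study):

```python
def thousand_grain_weight(sample_weight_g, seed_count):
    """Extrapolate the weight of 1000 seeds from a counted, weighed sample."""
    return sample_weight_g / seed_count * 1000.0

# e.g., a 5 g sample in which the detector counts 250 seeds
print(thousand_grain_weight(5.0, 250))  # → 20.0
```

This also shows why counting accuracy matters: undercounting the sample by 10% (225 instead of 250) would inflate the estimated thousand-grain weight by about 11%.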

12 pages, 531 KB  
Article
Vision-Related Quality of Life in Patients with Optic Neuropathy: Insights from a Portuguese Single Center Using the NEI-VFQ-25
by Sofia Bezerra, Ricardo Soares dos Reis, Maria José Sá and Joana Guimarães
Neurol. Int. 2025, 17(11), 184; https://doi.org/10.3390/neurolint17110184 - 11 Nov 2025
Abstract
Background/Objectives: Optic neuropathies (ON) are a clinically heterogeneous group of disorders that can cause profound and lasting visual disability, with wide-ranging effects on patients’ quality of life. Although the NEI-VFQ-25 is an instrument for assessing vision-related quality of life (VRQoL), few studies have systematically compared patient-reported outcomes across multiple ON subtypes, especially in underrepresented populations. We aimed to delineate how etiological differences and longitudinal visual acuity trajectories shape VRQoL in a diverse Portuguese cohort with ON. Methods: This retrospective, cross-sectional study included 152 patients diagnosed with ON and followed at São João University Hospital, Portugal. All participants completed the validated NEI-VFQ-25. Diagnosis-specific differences in VRQoL were interrogated using ANCOVA and linear mixed-effects models, controlling for age and sex. Visual acuity changes over time were analyzed in relation to patient-reported outcomes. Results: Substantial heterogeneity in VRQoL was observed across ON subtypes. Patients with MS-related ON (MS-RON) and idiopathic ON reported significantly higher NEI-VFQ-25 scores in domains such as general vision, mental health, and dependency (F = 3.30, p = 0.013; ηp2 = 0.08), while those with ischemic or other inflammatory etiologies showed persistently lower scores. Notably, both final visual acuity and diagnosis were independently associated with NEI-VFQ-25 composite scores, highlighting the correlation between objective and subjective measures of visual function. Age and diagnosis independently predicted poorer VRQoL. Conclusions: This study provides the first comprehensive evaluation of vision-related quality of life (VRQoL) across a diverse cohort of optic neuropathy patients in a Portuguese tertiary center, using the NEI-VFQ-25. Our results underscore the heterogeneity of functional impact across ON subtypes, emphasizing the value of integrating sensitive, multidimensional assessment tools into neuro-ophthalmic clinical care, especially in populations historically underrepresented in research. Full article

17 pages, 9161 KB  
Article
XBusNet: Text-Guided Breast Ultrasound Segmentation via Multimodal Vision–Language Learning
by Raja Mallina and Bryar Shareef
Diagnostics 2025, 15(22), 2849; https://doi.org/10.3390/diagnostics15222849 - 11 Nov 2025
Abstract
Background/Objectives: Precise breast ultrasound (BUS) segmentation supports reliable measurement, quantitative analysis, and downstream classification yet remains difficult for small or low-contrast lesions with fuzzy margins and speckle noise. Text prompts can add clinical context, but directly applying weakly localized text–image cues (e.g., CAM/CLIP-derived signals) tends to produce coarse, blob-like responses that smear boundaries unless additional mechanisms recover fine edges. Methods: We propose XBusNet, a novel dual-prompt, dual-branch multimodal model that combines image features with clinically grounded text. A global pathway based on a CLIP Vision Transformer encodes whole-image semantics conditioned on lesion size and location, while a local U-Net pathway emphasizes precise boundaries and is modulated by prompts that describe shape, margin, and Breast Imaging Reporting and Data System (BI-RADS) terms. Prompts are assembled automatically from structured metadata, requiring no manual clicks. We evaluate the model on the Breast Lesions USG (BLU) dataset using five-fold cross-validation. The primary metrics are Dice and Intersection over Union (IoU); we also conduct size-stratified analyses and ablations to assess the roles of the global and local paths and the text-driven modulation. Results: XBusNet achieves state-of-the-art performance on BLU, with a mean Dice of 0.8766 and IoU of 0.8150, outperforming six strong baselines. Small lesions show the largest gains, with fewer missed regions and fewer spurious activations. Ablation studies show complementary contributions of global context, local boundary modeling, and prompt-based modulation. Conclusions: A dual-prompt, dual-branch multimodal design that merges global semantics with local precision yields accurate BUS segmentation masks and improves robustness for small, low-contrast lesions. Full article

14 pages, 738 KB  
Opinion
Envisioning the Future of Machine Learning in the Early Detection of Neurodevelopmental and Neurodegenerative Disorders via Speech and Language Biomarkers
by Georgios P. Georgiou
Acoustics 2025, 7(4), 72; https://doi.org/10.3390/acoustics7040072 - 10 Nov 2025
Abstract
Speech and language offer a rich, non-invasive window into brain health. Advances in machine learning (ML) have enabled increasingly accurate detection of neurodevelopmental and neurodegenerative disorders through these modalities. This paper envisions the future of ML in the early detection of neurodevelopmental disorders like autism spectrum disorder and attention-deficit/hyperactivity disorder, and neurodegenerative disorders, such as Parkinson’s disease and Alzheimer’s disease, through speech and language biomarkers. We explore the current landscape of ML techniques, including deep learning and multimodal approaches, and review their applications across various conditions, highlighting both successes and inherent limitations. Our core contribution lies in outlining future trends across several critical dimensions. These include the enhancement of data availability and quality, the evolution of models, the development of multilingual and cross-cultural models, the establishment of regulatory and clinical translation frameworks, and the creation of hybrid systems enabling human–artificial intelligence (AI) collaboration. Finally, we conclude with a vision for future directions, emphasizing the potential integration of ML-driven speech diagnostics into public health infrastructure, the development of patient-specific explainable AI, and its synergistic combination with genomics and brain imaging for holistic brain health assessment. Overcoming substantial hurdles in validation, generalization, and clinical adoption, the field is poised to shift toward ubiquitous, accessible, and highly personalized tools for early diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence in Acoustic Phonetics)

42 pages, 503 KB  
Article
DigStratCon: A Digital or Technology Strategy Framework
by Will Serrano
Adm. Sci. 2025, 15(11), 436; https://doi.org/10.3390/admsci15110436 - 10 Nov 2025
Abstract
Digital or Technology strategies are the first step of the Digital Transformation. The main risk is that information and assessments not included in the strategy and left to be confirmed and managed at later stages have the potential to negatively affect the successful implementation of the Digital Transformation, therefore negating sought-after business benefits. To mitigate this risk, this article proposes DigStratCon, a Digital or Technology strategy framework that generalises the Digital Transformation, detaching it from its specific functional application, such as marketing, products, Information Technology (IT), and Operational Technology (OT). Therefore, DigStratCon applies to any area within a government, organisation or infrastructure, including Data and Artificial Intelligence (AI). DigStratCon defines seven components within a Digital or Technology strategy, specifically (1) market research, (2) target state, (3) current state, (4) roadmap, (5) risks, (6) supply chain, and finally (7) enablers. A qualitative analysis of several United Kingdom (UK) government digital strategies assesses their completeness against the DigStratCon model. On average, UK digital strategies score 6/7 with an innovative and ambitious vision; however, they generally lack a common or standardised structure and wider international benchmark and alignment. Full article
(This article belongs to the Section Strategic Management)

12 pages, 2797 KB  
Perspective
Fixation Stability as a Surrogate for Reading Abilities in Age-Related Macular Degeneration: A Perspective
by Carolina Molin, Edoardo Midena, Enrica Convento, Giulia Midena and Elisabetta Pilotto
J. Clin. Med. 2025, 14(22), 7941; https://doi.org/10.3390/jcm14227941 - 9 Nov 2025
Abstract
Age-related macular degeneration (AMD) significantly impacts central vision, fixation site and stability, and reading abilities. This work aims to analyze the relationship between retinal fixation parameters measured using microperimetry and reading performance in patients with AMD. We identified the role of fixation stability measurement in the evaluation of reading abilities and discussed its implications both in clinical practice and in clinical trials. Our analysis highlights the importance of retinal fixation assessment as a precise surrogate for evaluating reading ability outcomes in AMD patients and as a new clinical endpoint to demonstrate the functional effects of present and emerging target therapies. Full article
(This article belongs to the Section Ophthalmology)

19 pages, 4107 KB  
Article
Structured Prompting and Collaborative Multi-Agent Knowledge Distillation for Traffic Video Interpretation and Risk Inference
by Yunxiang Yang, Ningning Xu and Jidong J. Yang
Computers 2025, 14(11), 490; https://doi.org/10.3390/computers14110490 - 9 Nov 2025
Abstract
Comprehensive highway scene understanding and robust traffic risk inference are vital for advancing Intelligent Transportation Systems (ITS) and autonomous driving. Traditional approaches often struggle with scalability and generalization, particularly under the complex and dynamic conditions of real-world environments. To address these challenges, we introduce a novel structured prompting and multi-agent collaborative knowledge distillation framework that enables automatic generation of high-quality traffic scene annotations and contextual risk assessments. Our framework orchestrates two large vision–language models (VLMs): GPT-4o and o3-mini, using a structured Chain-of-Thought (CoT) strategy to produce rich, multiperspective outputs. These outputs serve as knowledge-enriched pseudo-annotations for supervised fine-tuning of a much smaller student VLM. The resulting compact 3B-scale model, named VISTA (Vision for Intelligent Scene and Traffic Analysis), is capable of understanding low-resolution traffic videos and generating semantically faithful, risk-aware captions. Despite its significantly reduced parameter count, VISTA achieves strong performance across established captioning metrics (BLEU-4, METEOR, ROUGE-L, and CIDEr) when benchmarked against its teacher models. This demonstrates that effective knowledge distillation and structured role-aware supervision can empower lightweight VLMs to capture complex reasoning capabilities. The compact architecture of VISTA facilitates efficient deployment on edge devices, enabling real-time risk monitoring without requiring extensive infrastructure upgrades. Full article

14 pages, 2627 KB  
Article
Computerized Full-Color Assessment for Distinguishing Color Vision Deficiency
by Jin-Cherng Hsu, Chia-Ying Tsai, Chih-Hsuan Shih, Shao-Rong Huang, Hsing-Yu Wu and Yung-Shin Sun
Diagnostics 2025, 15(22), 2837; https://doi.org/10.3390/diagnostics15222837 - 9 Nov 2025
Abstract
Background/Objectives: Current methods for diagnosing color vision deficiency (CVD) generally fall into two categories: computer-based tests that lack full-color lighting and non-computer-based tests that provide full-color lighting. Most of these approaches face several limitations, including inaccurate illumination of test samples, inconsistent test durations, learning effects, and the need for highly skilled operators. Methods: To address these limitations, this study introduces the Computerized Full-Color Assessment (CFCA) method, which employs a full-color light generation system based on 16 color spectra selected from the classical Farnsworth D-15 (D-15) test. In the CFCA method, each pair of colors generated by the system was presented under software control, and participants indicated within three seconds whether the colors were different. The total test duration was limited to 5 min. The method was validated using 10 normal trichromats and 11 patients with CVDs. Results: Results obtained from the CFCA were compared with those from the classical D-15 test using quantitative parameters, including confusion angle (CA) and confusion index (CI). Correlations between the two methods were analyzed. The p-values for CA and CI are 0.688 and 0.587, respectively, and the correlation coefficients are 0.821 for CA and 0.884 for CI, indicating a strong and statistically significant correlation. Conclusions: The CFCA method provides an accurate, convenient, and efficient tool for diagnosing CVD, with particular advantages for use in young children. It enables an expanded range of color choices beyond the 16 discs of the D-15 test and allows for the generation of individualized visual spectra, which can be applied in the design of customized color-vision-correcting glasses. Full article
(This article belongs to the Section Biomedical Optics)
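The correlation coefficients reported above (0.821 for CA, 0.884 for CI) are presumably Pearson correlations between the per-subject values obtained from the two tests; the abstract does not name the estimator, so treat this as an assumption. A minimal sketch with toy numbers, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# perfectly linearly related toy values give r = 1.0 (up to rounding)
print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```

In the study's setting, `xs` would hold one method's confusion angle (or index) per participant and `ys` the other method's, so r near 1 indicates the computerized test reproduces the D-15 ranking of participants.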

41 pages, 1927 KB  
Systematic Review
Advancements in Small-Object Detection (2023–2025): Approaches, Datasets, Benchmarks, Applications, and Practical Guidance
by Ali Aldubaikhi and Sarosh Patel
Appl. Sci. 2025, 15(22), 11882; https://doi.org/10.3390/app152211882 - 7 Nov 2025
Abstract
Small-object detection (SOD) remains an important and growing challenge in computer vision and is the backbone of many applications, including autonomous vehicles, aerial surveillance, medical imaging, and industrial quality control. Small objects, in pixels, lose discriminative features during deep neural network processing, making them difficult to disentangle from background noise and other artifacts. This survey presents a comprehensive and systematic review of the SOD advancements between 2023 and 2025, a period marked by the maturation of transformer-based architectures and a return to efficient, realistic deployment. We applied the PRISMA methodology for this work, yielding 112 seminal works in the field to ensure the robustness of our foundation for this study. We present a critical taxonomy of the developments since 2023, arranged in five categories: (1) multiscale feature learning; (2) transformer-based architectures; (3) context-aware methods; (4) data augmentation enhancements; and (5) advancements to mainstream detectors (e.g., YOLO). Third, we describe and analyze the evolving SOD-centered datasets and benchmarks and establish the importance of evaluating models fairly. Fourth, we contribute a comparative assessment of state-of-the-art models, evaluating not only accuracy (e.g., the average precision for small objects (AP_S)) but also important efficiency (FPS, latency, parameters, GFLOPS) metrics across standardized hardware platforms, including edge devices. We further use data-driven case studies in the remote sensing, manufacturing, and healthcare domains to create a bridge between academic benchmarks and real-world performance. Finally, we summarize practical guidance for practitioners, the model selection decision matrix, scenario-based playbooks, and the deployment checklist. The goal of this work is to help synthesize the recent progress, identify the primary limitations in SOD, and open research directions, including the potential future role of generative AI and foundational models, to address the long-standing data and feature representation challenges that have limited SOD. Full article

47 pages, 55858 KB  
Article
A Soft Robotic Gripper for Crop Harvesting: Prototyping, Imaging, and Model-Based Control
by Yalun Jiang and Javad Mohammadpour Velni
AgriEngineering 2025, 7(11), 378; https://doi.org/10.3390/agriengineering7110378 - 7 Nov 2025
Abstract
The global agricultural sector faces escalating labor shortages and post-harvest losses, particularly in delicate crop handling. This study introduces an integrated soft robotic harvesting system addressing these challenges through four key innovations. First, a low-cost, high-yield fabrication method for silicone-based soft grippers is proposed, reducing production costs by 60% via compressive-sealing molds. Second, a decentralized IoT architecture with edge computing achieves real-time performance (42 fps to 73 fps) on affordable hardware (around $180 per node). Third, a lightweight vision pipeline combines handcrafted geometric features and contrast analysis for crop maturity assessment and gripper tracking under occlusion. Fourth, a Neo-Hookean-based statics model incorporating circumferential stress and variable cross-sections reduces tip position errors to 5.138 mm. Experimental validation demonstrates 100% gripper fabrication yield and hybrid feedforward–feedback control efficacy. These advancements bridge the gap between laboratory prototypes and field-deployable solutions, offering scalable automation for perishable crop harvesting. Full article

16 pages, 4641 KB  
Article
Application of MambaBDA for Building Damage Assessment in the 2025 Los Angeles Wildfire
by Yangyang Yang, Wanchao Bian, Jiayi Fang, Minghao Tang, Zhonghua He, Ying Li and Gaofeng Fan
Buildings 2025, 15(22), 4019; https://doi.org/10.3390/buildings15224019 - 7 Nov 2025
Abstract
Timely detection of the spatial distribution of building damage in the immediate aftermath of a disaster is essential for guiding emergency response, rescue, and resource allocation. In January 2025, a large-scale wildfire struck the Los Angeles metropolitan area, with Altadena as one of the most severely affected regions. In this study, we adopted the MambaBDA framework, which is built upon Mamba, a recently proposed state space architecture in the computer vision domain, and tailored it for spatio-temporal modeling of disaster impacts. The model was trained on the publicly available xBD dataset and subsequently applied to evaluate wildfire-induced building damage in Altadena, with pre- and post-disaster data acquired from WorldView-3 imagery captured during the 2025 Los Angeles wildfire. The workflow consisted of building localization and damage grading, followed by optimization to improve boundary accuracy and conversion to individual building-level assessments. Results show that about 28% of the buildings in Altadena suffered Major or Destroyed levels of damage. Population impact analysis, based on GHSL data, estimated approximately 3241 residents living in Major damage zones and 31,975 in Destroyed zones. These findings highlight the applicability of MambaBDA to wildfire scenarios, demonstrating its capability for efficient and transferable building damage assessment. The proposed approach provides timely information to support post-disaster rescue and recovery decision-making. Full article
(This article belongs to the Special Issue Risks and Challenges of AI-Driven Construction Industry)
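The abstract describes converting pixel-level damage grading into individual building-level assessments without stating the aggregation rule. A minimal sketch, assuming a majority vote of pixel classes inside each building footprint (the function name, class names, and toy arrays are hypothetical):

```python
import numpy as np

# xBD-style four-level damage scale
DAMAGE_CLASSES = ["No damage", "Minor", "Major", "Destroyed"]

def building_level_damage(damage_map: np.ndarray,
                          footprint_ids: np.ndarray) -> dict:
    """Aggregate a per-pixel damage map to per-building labels.

    damage_map:    HxW array of class indices (0..3).
    footprint_ids: HxW array; 0 = background, >0 = building id.
    Returns {building_id: class_name} by majority vote over the footprint.
    """
    labels = {}
    for bid in np.unique(footprint_ids):
        if bid == 0:
            continue  # skip background
        pixels = damage_map[footprint_ids == bid]
        labels[int(bid)] = DAMAGE_CLASSES[np.bincount(pixels).argmax()]
    return labels

# toy 4x4 scene: one building occupying the left half
dmg = np.array([[3, 3, 0, 0],
                [3, 2, 0, 0],
                [2, 3, 0, 0],
                [3, 3, 0, 0]])
fp = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [1, 1, 0, 0],
               [1, 1, 0, 0]])
print(building_level_damage(dmg, fp))  # {1: 'Destroyed'}
```

Majority vote is a common default; alternatives such as taking the worst observed class per footprint trade recall for precision in the "Destroyed" category.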

20 pages, 3525 KB  
Article
Automated Assessment of Green Infrastructure Using E-nose, Integrated Visible-Thermal Cameras and Computer Vision Algorithms
by Areej Shahid, Sigfredo Fuentes, Claudia Gonzalez Viejo, Bryce Widdicombe and Ranjith R. Unnithan
Sensors 2025, 25(22), 6812; https://doi.org/10.3390/s25226812 - 7 Nov 2025
Abstract
The parameterization of vegetation indices (VIs) is crucial for sustainable irrigation and horticulture management, specifically for urban green infrastructure (GI) management. However, the constraints of roadside traffic, motor and industrially related pollution, and potential public vandalism compromise the efficacy of conventional in situ monitoring systems. The shortcomings of prevalent satellites, UAVs, and manual/automated sensor measurements and monitoring systems have already been reviewed. This research proposes a novel urban GI monitoring system based on an integration of gas exchange and various VIs obtained from computer vision algorithms applied to data acquired from three novel sources: (1) gas sensor data covering nine different volatile organic compounds, acquired with an electronic nose (E-nose) designed on a PCB for stable performance under variable environmental conditions; (2) plant growth parameters including effective leaf area index (LAIe), infrared index (Ig), canopy temperature depression (CTD), and tree water stress index (TWSI); (3) meteorological data for all measurement campaigns based on wind velocity, air temperature, rainfall, air pressure, and air humidity conditions. To account for spatial and temporal data acquisition variability, the integrated cameras and the E-nose were mounted on a vehicle roof to acquire information from 172 Elm trees planted across the Royal Parade, Melbourne. Results showed strong correlations among air contaminants, ambient conditions, and plant growth status, which can be modelled and optimized for better smart irrigation and environmental monitoring based on real-time data. Full article
(This article belongs to the Section Environmental Sensing)
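Two of the listed growth parameters have standard definitions in plant-stress thermography: CTD as canopy minus air temperature, and Jones's infrared index Ig computed from dry and wet reference temperatures. A minimal sketch under the assumption that the paper follows these conventional formulations (the exact variants used are not stated in the abstract):

```python
def ctd(t_canopy: float, t_air: float) -> float:
    """Canopy temperature depression (deg C).

    Negative when the canopy is cooler than the surrounding air,
    as expected for an actively transpiring tree.
    """
    return t_canopy - t_air

def infrared_index_ig(t_canopy: float, t_dry: float, t_wet: float) -> float:
    """Jones's thermal index Ig, proportional to stomatal conductance.

    t_dry and t_wet are the temperatures of non-transpiring (dry)
    and fully wet reference surfaces, respectively.
    """
    return (t_dry - t_canopy) / (t_canopy - t_wet)

# example: canopy 24 C, air 26 C, dry reference 30 C, wet reference 20 C
print(ctd(24.0, 26.0))                      # -2.0
print(infrared_index_ig(24.0, 30.0, 20.0))  # 1.5
```

Both indices need only thermal-camera readings plus reference temperatures, which suits the vehicle-mounted visible-thermal acquisition described above.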
