Search Results (1,181)

Search Parameters:
Keywords = vision measurement system

26 pages, 39341 KB  
Article
Recognition of Wood-Boring Insect Creeping Signals Based on Residual Denoising Vision Network
by Henglong Lin, Huajie Xue, Jingru Gong, Cong Huang, Xi Qiao, Liping Yin and Yiqi Huang
Sensors 2025, 25(19), 6176; https://doi.org/10.3390/s25196176 - 5 Oct 2025
Abstract
Currently, the customs inspection of wood-boring pests in timber still relies primarily on manual visual inspection: observing insect holes on the timber surface and splitting the timber for confirmation. However, this method has significant drawbacks, such as long detection times, high labor costs, and accuracy that depends on human experience, making it difficult to meet the practical need for efficient, intelligent customs quarantine. To address this issue, this paper develops a rapid identification system, built on the PyQt framework, that recognizes the peristaltic signals of wood-boring pests. The system employs a deep learning model with multi-attention mechanisms, the Residual Denoising Vision Network (RDVNet). First, a LabVIEW-based hardware–software system collects pest peristaltic signals in an environment free of vibration interference. The original signals are then clipped, converted to audio format, and mixed with external noise. Signal features are extracted with three cepstral methods: Mel-Frequency Cepstral Coefficients (MFCC), Power-Normalized Cepstral Coefficients (PNCC), and RelAtive SpecTrAl-Perceptual Linear Prediction (RASTA-PLP), and input into the model. In the experimental stage, the paper compares the denoising module of RDVNet (de-RDVNet) with four classic denoising models under five noise intensity conditions, and then evaluates RDVNet against four other noise-reduction classification models on the classification task. The results show that PNCC has the most comprehensive feature extraction capability. With PNCC as the model input, de-RDVNet achieves an average peak signal-to-noise ratio (PSNR) of 29.8 and a Structural Similarity Index Measure (SSIM) of 0.820 in the denoising experiments, both the best among the compared models. In the classification experiments, RDVNet attains an average F1 score of 0.878 and an accuracy of 92.8%, the best overall performance. Applied to customs timber quarantine, the system can effectively improve detection efficiency and reduce labor costs, and it has significant practical value and promotion prospects.
(This article belongs to the Section Smart Agriculture)
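The denoising evaluation described above can be illustrated compactly. The following is a minimal sketch, not the authors' code: it assumes librosa for MFCC extraction and scikit-image for the PSNR/SSIM metrics the abstract reports, with a synthetic signal standing in for a recorded creeping clip; PNCC, RASTA-PLP, and RDVNet itself are not shown.

```python
# Sketch: cepstral features plus the two denoising metrics (PSNR, SSIM)
# named in the abstract. The signal is synthetic; RDVNet is not included.
import numpy as np
import librosa
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

sr = 16000
y = np.random.default_rng(0).standard_normal(sr)        # 1 s stand-in signal
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).astype(np.float64)

# Compare a "clean" feature map against a noise-corrupted one, as in the
# de-RDVNet denoising experiments.
noisy = mfcc + np.random.default_rng(1).normal(0.0, 1.0, mfcc.shape)
span = float(mfcc.max() - mfcc.min())
psnr = peak_signal_noise_ratio(mfcc, noisy, data_range=span)
ssim = structural_similarity(mfcc, noisy, data_range=span)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```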

26 pages, 16624 KB  
Article
Design and Evaluation of an Automated Ultraviolet-C Irradiation System for Maize Seed Disinfection and Monitoring
by Mario Rojas, Claudia Hernández-Aguilar, Juana Isabel Méndez, David Balderas-Silva, Arturo Domínguez-Pacheco and Pedro Ponce
Sensors 2025, 25(19), 6070; https://doi.org/10.3390/s25196070 - 2 Oct 2025
Abstract
This study presents the development and evaluation of an automated ultraviolet-C irradiation system for maize seed treatment, emphasizing disinfection performance, environmental control, and vision-based monitoring. The system features dual 8-watt ultraviolet-C lamps, sensors for temperature and humidity, and an air extraction unit to regulate the microclimate of the chamber. Without air extraction, radiation stabilized within one minute, with internal temperatures increasing by 5.1 °C and humidity decreasing by 13.26% over 10 min. When activated, the extractor reduced heat build-up by 1.4 °C, minimized humidity fluctuations (4.6%), and removed odors, although it also attenuated the intensity of ultraviolet-C by up to 19.59%. A 10 min ultraviolet-C treatment significantly reduced the fungal infestation in maize seeds by 23.5–26.25% under both extraction conditions. Thermal imaging confirmed localized heating on seed surfaces, which stressed the importance of temperature regulation during exposure. Notable color changes (ΔE > 2.3) in treated seeds suggested radiation-induced pigment degradation. Ultraviolet-C intensity mapping revealed spatial non-uniformity, with measurements limited to a central axis, indicating the need for comprehensive spatial analysis. The integrated computer vision system successfully detected seed contours and color changes under high-contrast conditions, but underperformed under low-light or uneven illumination. These limitations highlight the need for improved image processing and consistent lighting to ensure accurate monitoring. Overall, the chamber shows strong potential as a non-chemical seed disinfection tool. Future research will focus on improving radiation uniformity, assessing effects on germination and plant growth, and advancing system calibration, safety mechanisms, and remote control capabilities.
(This article belongs to the Section Smart Agriculture)
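As a rough companion to the ΔE > 2.3 criterion above, here is a minimal sketch, assuming scikit-image; the two colour patches are synthetic stand-ins for before/after seed images, not data from the study.

```python
# Sketch: CIE Delta-E colour change between seed images, with 2.3 as the
# perceptibility threshold cited in the abstract. Patches are synthetic.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

before = np.full((32, 32, 3), [0.82, 0.70, 0.35])   # untreated seed tone
after = np.full((32, 32, 3), [0.78, 0.65, 0.30])    # slightly darkened tone

delta_e = deltaE_ciede2000(rgb2lab(before), rgb2lab(after))  # per-pixel ΔE
print(f"mean ΔE = {delta_e.mean():.2f}; perceptible: {delta_e.mean() > 2.3}")
```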

25 pages, 12510 KB  
Article
Computer Vision-Based Optical Odometry Sensors: A Comparative Study of Classical Tracking Methods for Non-Contact Surface Measurement
by Ignas Andrijauskas, Marius Šumanas, Andrius Dzedzickis, Wojciech Tanaś and Vytautas Bučinskas
Sensors 2025, 25(19), 6051; https://doi.org/10.3390/s25196051 - 1 Oct 2025
Abstract
This article presents a principled framework for selecting and tuning classical computer vision algorithms in the context of optical displacement sensing. By isolating key factors that affect algorithm behavior—such as feed window size and motion step size—the study seeks to move beyond intuition-based practices and provide rigorous, repeatable performance evaluations. Computer vision-based optical odometry sensors offer non-contact, high-precision measurement capabilities essential for modern metrology and robotics applications. This paper presents a systematic comparative analysis of three classical tracking algorithms—phase correlation, template matching, and optical flow—for 2D surface displacement measurement using synthetic image sequences with subpixel-accurate ground truth. A virtual camera system generates controlled test conditions using a multi-circle trajectory pattern, enabling systematic evaluation of tracking performance using 400 × 400 and 200 × 200 pixel feed windows. The systematic characterization enables informed algorithm selection based on specific application requirements rather than empirical trial-and-error approaches.
(This article belongs to the Section Optical Sensors)
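Of the three trackers compared, phase correlation is the most self-contained to demonstrate. Below is a minimal sketch, assuming OpenCV; the frames are synthetic random textures with a known integer shift, whereas the paper's evaluation uses a virtual camera with subpixel ground truth.

```python
# Sketch: displacement recovery by windowed phase correlation, one of the
# three classical trackers compared in the paper. Frames are synthetic.
import cv2
import numpy as np

base = np.random.default_rng(1).random((240, 240)).astype(np.float32)
prev = np.ascontiguousarray(base[20:220, 20:220])   # 200 x 200 feed window
curr = np.ascontiguousarray(base[23:223, 25:225])   # window shifted (5, 3) px

window = cv2.createHanningWindow(prev.shape[::-1], cv2.CV_32F)
(dx, dy), response = cv2.phaseCorrelate(prev, curr, window)
print(f"estimated shift: ({dx:.2f}, {dy:.2f}) px, response {response:.2f}")
```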

16 pages, 1698 KB  
Article
Fall Detection by Deep Learning-Based Bimodal Movement and Pose Sensing with Late Fusion
by Haythem Rehouma and Mounir Boukadoum
Sensors 2025, 25(19), 6035; https://doi.org/10.3390/s25196035 - 1 Oct 2025
Abstract
The timely detection of falls among the elderly remains challenging. Single-modality sensing approaches using inertial measurement units (IMUs) or vision-based monitoring systems frequently exhibit high false positives and compromised accuracy under suboptimal operating conditions. We propose a novel deep learning-based bimodal sensing framework to address the problem, leveraging a memory-based autoencoder neural network for inertial abnormality detection and an attention-based neural network for visual pose assessment, with late fusion at the decision level. Our experimental evaluation with a custom dataset of simulated falls and routine activities, captured with waist-mounted IMUs and RGB cameras under dim lighting, shows significant performance improvement by the described bimodal late-fusion system, with an F1-score of 97.3% and, most notably, a false-positive rate of 3.6%, significantly lower than the 11.3% and 8.9% of the IMU-only and vision-only baselines, respectively. These results confirm the robustness of the described fall detection approach and validate its applicability to real-time fall detection under different light settings, including nighttime conditions.
(This article belongs to the Special Issue Sensor-Based Human Activity Recognition)
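The decision-level fusion is the simplest piece to sketch. The weights and threshold below are hypothetical placeholders, not the paper's tuned values; the two upstream networks are reduced to their output probabilities.

```python
# Sketch: late fusion of IMU and pose scores at the decision level, as the
# abstract describes. Weights/threshold are hypothetical, not tuned values.
def late_fusion(p_imu: float, p_pose: float,
                w_imu: float = 0.5, w_pose: float = 0.5,
                threshold: float = 0.5) -> bool:
    """Fuse the two per-modality fall probabilities into one decision."""
    fused = w_imu * p_imu + w_pose * p_pose
    return fused >= threshold

# e.g., the IMU autoencoder flags a strong anomaly, the pose network less so
print(late_fusion(p_imu=0.91, p_pose=0.62))   # -> True: raise a fall alarm
```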

18 pages, 4675 KB  
Article
Advancing Soil Assessment: Vision-Based Monitoring for Subgrade Quality and Dynamic Modulus
by Koohyar Faizi, Robert Evans and Rolands Kromanis
Geotechnics 2025, 5(4), 67; https://doi.org/10.3390/geotechnics5040067 - 1 Oct 2025
Abstract
Accurate evaluation of subgrade behaviour under dynamic loading is essential for the long-term performance of transport infrastructure. While the Light Weight Deflectometer (LWD) is commonly used to assess subgrade stiffness, it provides only a single stiffness value and may not fully capture the time-dependent response of soil. This study presents an image-based vision system developed to monitor soil surface displacements during loading, enabling more detailed analysis of dynamic behaviour. The system incorporates high-speed cameras and MATLAB-based computer vision algorithms to track vertical movement of the plate during impact. Laboratory and field experiments were conducted to evaluate the system's performance, with results compared directly to those from the LWD. A strong correlation was observed (R2 = 0.9901), with differences between the two methods ranging from 0.8% to 13%, confirming the accuracy of the vision-based measurements despite the limited dataset. The findings highlight the system's potential as a practical and cost-effective tool for enhancing subgrade assessment, particularly in applications requiring improved understanding of ground response under repeated or transient loading.
(This article belongs to the Special Issue Recent Advances in Geotechnical Engineering (3rd Edition))
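The study's MATLAB pipeline is not public here, but the core vision idea, tracking the plate's vertical position frame by frame, can be sketched with OpenCV as an illustrative analogue; the frames below are synthetic stand-ins for high-speed camera images.

```python
# Sketch: template matching to track the loading plate's vertical movement
# during an impact. Synthetic frames; the study uses MATLAB-based tracking.
import cv2
import numpy as np

def make_frame(plate_y: int) -> np.ndarray:
    frame = np.zeros((240, 320), np.uint8)
    cv2.rectangle(frame, (120, plate_y), (200, plate_y + 20), 255, -1)
    return frame

frames = [make_frame(100 + d) for d in (0, 2, 5, 9, 6, 3, 1)]  # impact dip
template = np.ascontiguousarray(frames[0][95:125, 110:210])    # plate ROI

ys = []
for frame in frames:
    res = cv2.matchTemplate(frame, template, cv2.TM_SQDIFF)
    _, _, (x, y), _ = cv2.minMaxLoc(res)      # minLoc = best match for SQDIFF
    ys.append(y)

# pixel displacement relative to the first frame; scale by mm/px to get mm
print("vertical displacement (px):", [y - ys[0] for y in ys])
```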

5 pages, 155 KB  
Editorial
Traffic Safety Measures and Assessment
by Juan Li and Bobin Wang
Appl. Sci. 2025, 15(19), 10532; https://doi.org/10.3390/app151910532 - 29 Sep 2025
Abstract
Traffic safety is undergoing a profound transformation, driven by advances in data science, sensing technologies, and computational modeling. Proactive approaches are enabling the early identification of potential hazards, real-time decision-making, and the development of smarter, safer transportation systems. This Special Issue summarizes recent progress in traffic safety assessment, highlighting the application of emerging tools such as machine learning, explainable artificial intelligence, and computer vision. These innovations are used to predict crash risks, evaluate surrogate safety measures, and automate the analysis of behavioral data, contributing to more inclusive and adaptive safety frameworks, particularly for vulnerable road users such as pedestrians and cyclists. The research also addresses key challenges, including data integration across diverse sources, aligning safety metrics with human perception, and ensuring the scalability of models in complex environments. By advancing both technical methodologies and human-centered evaluation, these developments signal a shift toward more intelligent, transparent, and equitable approaches to traffic safety assessment and policy-making.
(This article belongs to the Special Issue Traffic Safety Measures and Assessment)
25 pages, 6044 KB  
Article
Computer Vision-Based Multi-Feature Extraction and Regression for Precise Egg Weight Measurement in Laying Hen Farms
by Yunxiao Jiang, Elsayed M. Atwa, Pengguang He, Jinhui Zhang, Mengzui Di, Jinming Pan and Hongjian Lin
Agriculture 2025, 15(19), 2035; https://doi.org/10.3390/agriculture15192035 - 28 Sep 2025
Abstract
Egg weight monitoring provides critical data for calculating the feed-to-egg ratio and improving poultry farming efficiency. Installing a computer vision monitoring system in egg collection systems enables efficient, low-cost automated egg weight measurement. However, its accuracy is compromised by egg clustering during transportation and by low-contrast edges, which limits the widespread adoption of such methods. To address this, we propose an egg measurement method based on computer vision with multi-feature extraction and regression. The proposed pipeline integrates two artificial neural networks: Central differential-EfficientViT YOLO (CEV-YOLO) and the Egg Weight Measurement Network (EWM-Net). CEV-YOLO is an enhanced version of YOLOv11, incorporating central differential convolution (CDC) and an efficient Vision Transformer (EfficientViT), enabling accurate pixel-level egg segmentation in the presence of occlusions and low-contrast edges. EWM-Net is a custom-designed neural network that uses the segmented egg masks to perform advanced feature extraction and precise weight estimation. Experimental results show that CEV-YOLO outperforms other YOLO-based models in egg segmentation, with a precision of 98.9%, a recall of 97.5%, and an Average Precision (AP) at an Intersection over Union (IoU) threshold of 0.9 (AP90) of 89.8%. EWM-Net achieves a mean absolute error (MAE) of 0.88 g and an R2 of 0.926 in egg weight measurement, outperforming six mainstream regression models. This study provides a practical, automated solution for precise egg weight measurement in production scenarios, which is expected to improve the accuracy and efficiency of feed-to-egg ratio measurement in laying hen farms.
(This article belongs to the Section Agricultural Product Quality and Safety)
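EWM-Net itself is not reproduced here, but the overall segment-then-regress idea can be sketched with an ordinary linear model as a stand-in regressor; the mask-derived features and weights below are synthetic, not the paper's data.

```python
# Sketch: regressing egg weight from geometric features of segmented masks.
# A linear model stands in for EWM-Net; all numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# per-egg features: (mask area px, major axis px, minor axis px)
X = rng.uniform([9000, 110, 80], [14000, 150, 110], size=(50, 3))
# synthetic ground-truth weights (g) with measurement noise
y = 0.004 * X[:, 0] + 0.10 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.5, 50)

model = LinearRegression().fit(X, y)
mae = np.abs(model.predict(X) - y).mean()            # cf. the reported 0.88 g
print(f"MAE = {mae:.2f} g, R2 = {model.score(X, y):.3f}")
```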

15 pages, 14701 KB  
Article
Vision-Based Characterization of Gear Transmission Mechanisms to Improve 3D Laser Scanner Accuracy
by Fernando Lopez-Medina, José A. Núñez-López, Oleg Sergiyenko, Dennis Molina-Quiroz, Cesar Sepulveda-Valdez, Jesús R. Herrera-García, Vera Tyrsa and Ruben Alaniz-Plata
Metrology 2025, 5(4), 58; https://doi.org/10.3390/metrology5040058 - 25 Sep 2025
Abstract
Some laser scanners utilize stepper motor-driven optomechanical assemblies to position the laser beam precisely during triangulation. In laser scanners such as the Technical Vision System (TVS) presented here, gear transmissions are implemented between the motor and the optical assembly to enhance motion resolution. However, due to the customized nature of the mechanical design, errors in manufacturing or insufficient mechanical characterization can introduce deviations in the computed 3D coordinates. In this work, we present a novel method for estimating the degrees-per-step ratio at the output of the laser positioner's transmission mechanism using a stereovision system. Experimental results demonstrate the effectiveness of the proposed method, which reduces the need for manual metrological instruments and simplifies the calibration procedure through vision-assisted measurements. The method yielded estimated angular resolutions of approximately 0.06° and 0.07° per motor step in the horizontal and vertical axes, respectively: key parameters that define the minimal resolvable displacement of the projected beam in dynamic triangulation.
(This article belongs to the Special Issue Advancements in Optical Measurement Devices and Technologies)
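The degrees-per-step estimate reduces to fitting a line through vision-measured beam angles against commanded steps. A minimal sketch under that reading, with synthetic angle samples in place of the stereo measurements:

```python
# Sketch: least-squares estimate of the transmission's degrees-per-step
# ratio from angle-vs-step samples. Angles are synthetic stand-ins for
# the paper's stereovision measurements.
import numpy as np

steps = np.arange(0, 200, 10)                 # commanded motor steps
true_ratio = 0.06                             # deg/step (assumed, cf. 0.06°)
angles = true_ratio * steps + np.random.default_rng(1).normal(0, 0.01, steps.size)

ratio, offset = np.polyfit(steps, angles, 1)  # slope = degrees per motor step
print(f"estimated ratio: {ratio:.4f} deg/step")
```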

15 pages, 2454 KB  
Article
Fluorescence-Based In Vitro Detection of Wound-Associated Bacteria with a Handheld Imaging System
by Jonas Horn, Anna Dalinskaya, Emil Paluch, Finn-Ole Nord and Johannes Ruopp
Diagnostics 2025, 15(19), 2436; https://doi.org/10.3390/diagnostics15192436 - 24 Sep 2025
Abstract
Background: Chronic and acute wounds are often colonized by polymicrobial biofilms, delaying healing and complicating treatment. Rapid, non-invasive detection of pathogenic bacteria is therefore crucial for timely and targeted therapy. This study investigated porphyrin-producing bacterial species using the handheld cureVision imaging system. Methods: In this study, 20 clinically relevant, porphyrin-producing bacterial species were cultured on δ-aminolevulinic acid (ALA)-supplemented agar and analyzed using the handheld cureVision imaging system under 405 nm excitation. Both Red-Green-Blue (RGB) and fluorescence images were acquired under ambient daylight conditions, and fluorescence signals were quantified by grayscale intensity analysis. Results: All tested species exhibited measurable red porphyrin-associated fluorescence, with the highest intensities observed in Klebsiella pneumoniae, Klebsiella oxytoca, Veillonella parvula, and Alcaligenes faecalis. A standardized detectability threshold of 0.25, derived from negative controls, enabled semi-quantitative comparison across species. Statistical analysis confirmed that the fluorescence intensities of all bacterial samples were significantly elevated compared to the control (Wilcoxon signed-rank test and sign test, both p < 0.001; median intensity = 0.835, IQR: 0.63–0.975). Conclusions: These results demonstrate that the cureVision system enables robust and reliable detection of porphyrin-producing wound bacteria, supporting its potential as a rapid, non-invasive diagnostic method for assessing wound colonization and guiding targeted clinical interventions.
(This article belongs to the Section Medical Imaging and Theranostics)
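The quantification step lends itself to a short sketch: grayscale intensities against the 0.25 detectability threshold, followed by the Wilcoxon signed-rank test named in the abstract. The intensity values below are synthetic, assuming SciPy.

```python
# Sketch: thresholded fluorescence detection plus the Wilcoxon signed-rank
# test from the abstract. Intensities are synthetic stand-ins.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
control = rng.uniform(0.05, 0.20, 20)     # negative-control intensities
samples = rng.uniform(0.60, 0.98, 20)     # porphyrin fluorescence per species

detectable = samples > 0.25               # standardized threshold from controls
stat, p = wilcoxon(samples - control)     # paired test against zero median
print(f"detected {detectable.sum()}/20 species, p = {p:.2e}")
```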

11 pages, 4334 KB  
Communication
Real-Time Object Classification via Dual-Pixel Measurement
by Jianing Yang, Ran Chen, Yicheng Peng, Lingyun Zhang, Ting Sun and Fei Xing
Sensors 2025, 25(18), 5886; https://doi.org/10.3390/s25185886 - 20 Sep 2025
Abstract
Achieving rapid and accurate object classification holds significant importance in various domains. However, conventional vision-based techniques suffer from several limitations, including high data redundancy and strong dependence on image quality. In this work, we present a high-speed, image-free object classification method based on dual-pixel measurement and normalized central moment invariants. Leveraging the complementary modulation capability of a digital micromirror device (DMD), the proposed system requires only five tailored binary illumination patterns to simultaneously extract geometric features and perform classification. The system can achieve a classification update rate of up to 4.44 kHz, offering significant improvements in both efficiency and accuracy compared to traditional image-based approaches. Numerical simulations verify the robustness of the method under similarity transformations—including translation, scaling, and rotation—while experimental validations further demonstrate reliable performance across diverse object types. This approach enables real-time, low-data-throughput, and reconstruction-free classification, offering new potential for optical computing and edge intelligence applications.
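The geometric features behind the method, normalized central moments, are easy to illustrate; OpenCV exposes them directly. A minimal sketch on a synthetic binary object (the DMD illumination optics are, of course, not modeled):

```python
# Sketch: normalized central moments of a binary object -- the translation-
# and scale-invariant features the classification scheme builds on.
import cv2
import numpy as np

obj = np.zeros((64, 64), np.uint8)
cv2.circle(obj, (40, 25), 10, 255, -1)        # synthetic test object

m = cv2.moments(obj, binaryImage=True)
invariants = [m["nu20"], m["nu11"], m["nu02"], m["nu30"], m["nu03"]]
print([f"{v:.4e}" for v in invariants])       # unchanged under translation
```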

17 pages, 4400 KB  
Article
Prediction of the Live Weight of Pigs in the Growing and Finishing Phases Through 3D Images in a Semiarid Region
by Nicoly Farias Gomes, Maria Vitória Neves de Melo, Maria Eduarda Gonçalves de Oliveira, Gledson Luiz Pontes de Almeida, Kenny Ruben Montalvo Morales, Taize Cavalcante Santana, Héliton Pandorfi, João Paulo Silva do Monte Lima, Alexson Pantaleão Machado de Carvalho, Rafaella Resende Andrade, Marcio Mesquita and Marcos Vinícius da Silva
AgriEngineering 2025, 7(9), 307; https://doi.org/10.3390/agriengineering7090307 - 19 Sep 2025
Abstract
Estimated population growth and increased demand for food production bring with them an evident need for more efficient and sustainable production systems. Computer vision therefore plays a fundamental role in developing and applying solutions that help producers with the issues limiting livestock production in Brazil and worldwide. In addition to being stressful for the producer and the animal, the conventional pig weighing system causes productive losses and can compromise meat quality, and it is considered a practice at odds with animal welfare. The objective was to develop a computational procedure to predict the live weight of pigs in the growing and finishing phases from the animals' volume, extracted by processing 3D images, and to analyze real and estimated biometric measurements to determine their relationships with live weight and volume. The study was conducted at Roçadinho farm, in the municipality of Capoeiras, located in the Agreste region of the state of Pernambuco, Brazil. Weight and 3D images were obtained using a Kinect® V2 camera, together with biometric measurements of 20 animals in the growing phase and 24 in the finishing phase, males and females from a Pietrain × Large White cross, totaling 44 animals. To analyze the images, a program developed in Python (PyCharm Community Edition 2020.1.4) was used; to relate the variables, principal component and regression analyses were performed. The coefficient of linear determination between weight and volume was 73.3%, 74.1%, and 97.3% for pigs in the growing phase, the finishing phase, and both phases combined, showing that this relationship is positive and expresses the weight of the animals satisfactorily. The relationship between the real and estimated biometric variables had a more expressive coefficient of determination in the global phase, with values between 77 and 94%.
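The volume feature at the heart of the model can be sketched from a top-down depth frame: per-pixel height over the floor, integrated over the animal's footprint. The frame below is synthetic, sized like a Kinect v2 depth image; the mm-per-pixel scale is an assumption.

```python
# Sketch: body volume from a top-down depth image, the predictor the
# weight model regresses on. Synthetic frame at Kinect v2 resolution.
import numpy as np

rng = np.random.default_rng(3)
depth_mm = np.full((424, 512), 2000.0)                 # floor 2 m from camera
depth_mm[150:300, 100:400] -= rng.uniform(200, 350, (150, 300))  # the animal

height_mm = np.clip(2000.0 - depth_mm, 0, None)        # height above floor
px_area_mm2 = 4.0                                      # assumed mm^2 per pixel
volume_cm3 = (height_mm * px_area_mm2).sum() / 1000.0  # mm^3 -> cm^3
print(f"estimated volume ≈ {volume_cm3:.0f} cm³")
```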

11 pages, 2811 KB  
Article
Real-Time Rice Milling Morphology Detection Using Hybrid Framework of YOLOv8 Instance Segmentation and Oriented Bounding Boxes
by Benjamin Ilo, Daniel Rippon, Yogang Singh, Alex Shenfield and Hongwei Zhang
Electronics 2025, 14(18), 3691; https://doi.org/10.3390/electronics14183691 - 18 Sep 2025
Abstract
Computer vision and image processing techniques have had great success in the food and drink industry. These technologies are used to analyse images, convert them to greyscale, and extract high-dimensional numerical data; however, for real-time grain and rice milling processes, they face several limitations compared to other applications. Currently, milled rice image samples are collected and separated so that grains do not contact one another during analysis, an approach that is not suitable for real-time industrial implementation. Real-time analysis can instead be accomplished by utilising artificial intelligence (AI) and machine learning (ML) approaches rather than traditional quality assessment methods, such as manual inspection, which are labour-intensive, time-consuming, and prone to human error. To address these challenges, this paper presents a novel approach for real-time rice morphology analysis during milling by integrating You Only Look Once version 8 (YOLOv8) instance segmentation and Oriented Bounding Box (OBB) detection models. While instance segmentation excels at detecting and classifying both touching and overlapping grains, it underperforms in precise size estimation. Conversely, the oriented bounding box detection model provides more accurate size measurements but struggles with touching and overlapping grains. Experiments demonstrate that the hybrid system resolves key limitations of the standalone models: instance segmentation alone achieves high detection accuracy (92% mAP@0.5) but struggles with size errors (0.35 mm MAE), while OBB alone reduces the size error to 0.12 mm MAE but falters with complex grain arrangements (88% mAP@0.5). By combining these approaches, our unified pipeline achieves superior performance, improving detection precision (99.5% mAP@0.5), segmentation quality (86% mask IoU), and size estimation (0.10 mm MAE). This represents a 71% reduction in size error compared to segmentation-only models and a 6% boost in detection accuracy over OBB-only methods. This study highlights the potential of advanced deep learning techniques for automating and optimising quality control in rice milling processes.
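The hybrid idea can be outlined with the ultralytics API: one segmentation model for robust detection of touching grains, one OBB model for sizing. The weight files below are hypothetical placeholders, not released models.

```python
# Sketch: YOLOv8 segmentation + OBB hybrid for rice morphology. The .pt
# weight files are hypothetical; the ultralytics package is assumed.
from ultralytics import YOLO

seg_model = YOLO("rice_seg.pt")    # YOLOv8-seg: classify touching grains
obb_model = YOLO("rice_obb.pt")    # YOLOv8-obb: oriented boxes for sizing

frame = "milling_line_frame.jpg"   # hypothetical camera frame
grains = seg_model(frame)[0]       # instance masks, robust to contact
boxes = obb_model(frame)[0]        # oriented boxes: centre, w, h, rotation

for box in boxes.obb.xywhr:        # rows of (cx, cy, w, h, angle)
    w_px, h_px = float(box[2]), float(box[3])
    print(f"grain length ≈ {max(w_px, h_px):.1f} px")  # scale by mm/px
```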

21 pages, 6059 KB  
Article
A Precision Measurement Method for Rooftop Photovoltaic Capacity Using Drone and Publicly Available Imagery
by Yue Hu, Yuce Liu, Yu Zhang, Hongwei Dong, Chongzheng Li, Hongzhi Mao, Fusong Wang and Meng Wang
Buildings 2025, 15(18), 3377; https://doi.org/10.3390/buildings15183377 - 17 Sep 2025
Abstract
Against the global backdrop of energy transition, the precise assessment of urban rooftop photovoltaic (PV) system capacity is crucial for optimizing the energy structure and enhancing the sustainable utilization of spatial resources. Publicly available aerial imagery is typically not orthorectified; using it directly leads to geometric distortions in rooftop PV outlines and errors in capacity prediction. To address this, a dual-optimization framework is proposed in this study, integrating monocular vision-based 3D reconstruction with a lightweight linear model. Leveraging the orthogonal characteristics of building structures, camera self-calibration and 3D reconstruction are achieved through geometric constraints imposed by vanishing points. Scale distortion is suppressed via a multi-dimensional geometric constraint error control strategy. Concurrently, a linear capacity-area model is constructed, simplifying the complexity inherent in traditional multi-parameter fitting. Using drone oblique photography and Google Earth public imagery, 3D reconstruction was performed for 20 PV-equipped buildings in Wuhan City. Two buildings with high-precision field survey data were selected as typical experimental subjects for validation. The results demonstrate that the 3D reconstruction method reduced the mean absolute percentage error (MAPE) of PV area identification, used here as an estimator of measurement uncertainty, from 10.58% (with the 2D method) to 3.47%, while the coefficient of determination (R2) of the capacity model reached 0.9548. These results suggest that the methodology can provide effective technical support for low-cost, high-precision urban rooftop PV resource surveys. It has the potential to significantly enhance the reliability of energy planning data, contributing to the efficient development of urban spatial resources and the achievement of sustainable energy transition goals.
(This article belongs to the Special Issue Research on Solar Energy System and Storage for Sustainable Buildings)
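The lightweight model is a one-variable linear fit of capacity on reconstructed PV area, with MAPE as the error measure. A minimal sketch under that reading, with synthetic area/capacity pairs in place of the 20 surveyed buildings:

```python
# Sketch: linear capacity-area model plus MAPE, the two quantities the
# abstract reports (R2 = 0.9548, MAPE 3.47%). All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
area_m2 = rng.uniform(50, 400, 20)                   # reconstructed PV areas
capacity_kw = 0.2 * area_m2 + rng.normal(0, 2, 20)   # ~200 W/m^2 modules

A = area_m2[:, None]
model = LinearRegression().fit(A, capacity_kw)
pred = model.predict(A)

mape = np.mean(np.abs((capacity_kw - pred) / capacity_kw)) * 100
print(f"R2 = {model.score(A, capacity_kw):.4f}, MAPE = {mape:.2f}%")
```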

22 pages, 5930 KB  
Article
A Computer Vision-Based Pedestrian Flow Management System for Footbridges and Its Applications
by Can Zhao, Yiyang Jiang and Jinfeng Wang
Infrastructures 2025, 10(9), 247; https://doi.org/10.3390/infrastructures10090247 - 17 Sep 2025
Abstract
Urban footbridges are critical infrastructure increasingly challenged by vibration issues induced by crowd activity. Real-time monitoring of pedestrian dynamics is essential for evaluating structural safety, ensuring pedestrian comfort, and enabling proactive management. This paper proposes a lightweight, fully automated computer vision system for real-time monitoring of crowd dynamics on footbridges. The system integrates object detection, multi-target tracking, and monocular depth estimation to precisely quantify key crowd metrics: pedestrian flow rate, density, and velocity. Experimental validation demonstrated high performance: flow rate estimation achieved 92.7% accuracy, density estimation yielded a 2.05% average relative error, and velocity estimation showed an 8.7% average relative error. Furthermore, the system demonstrates practical utility by successfully categorizing pedestrian behaviors using velocity data and triggering timely warnings. Crucially, field tests confirmed a minimum error of 5.56% between bridge vibration simulations driven by the system's captured crowd data and physically measured acceleration data. This high agreement validates the system's capability to provide reliable inputs for structural assessment. The proposed system establishes a practical technological foundation for intelligent footbridge management, focusing on safety, comfort, and operational efficiency through real-time crowd insights and automated alerts.
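The three crowd metrics reduce to simple arithmetic once detection, tracking, and depth estimation have produced ground-plane tracks. A minimal sketch with hypothetical track data (two pedestrians over two frames):

```python
# Sketch: pedestrian density and walking speed from ground-plane tracks,
# two of the metrics the system quantifies. Track data are hypothetical.
import numpy as np

fps, deck_area_m2 = 30, 120.0
# track id -> per-frame (x, y) ground positions in metres
tracks = {1: [(0.00, 0.00), (0.04, 0.00)],
          2: [(3.00, 1.00), (3.05, 1.01)]}

density = len(tracks) / deck_area_m2          # pedestrians per m^2
speeds = [np.hypot(*np.subtract(p[-1], p[0])) * fps / (len(p) - 1)
          for p in tracks.values()]           # m/s per pedestrian
print(f"density = {density:.3f} /m^2, mean speed = {np.mean(speeds):.2f} m/s")
```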

33 pages, 2085 KB  
Review
Advances in Nondestructive Technologies for External Eggshell Quality Evaluation
by Pengpeng Yu, Chaoping Shen, Junhui Cheng, Xifeng Yin, Chao Liu and Ziting Yu
Sensors 2025, 25(18), 5796; https://doi.org/10.3390/s25185796 - 17 Sep 2025
Abstract
The structural integrity of poultry eggs is essential for food safety, economic value, and hatchability. External eggshell quality—measured by thickness, strength, cracks, color, and cleanliness—is a key criterion for grading and sorting. Traditional assessment methods, although simple, are subjective, inefficient, and destructive. In contrast, recent developments in nondestructive testing (NDT) technologies have enabled precise, automated, real-time evaluation of eggshell characteristics. This review systematically summarizes state-of-the-art NDT techniques, including acoustic resonance, ultrasonic imaging, terahertz spectroscopy, machine vision, and electrical property sensing. Deep learning and sensor fusion methods are highlighted for their superior accuracy in microcrack detection (up to 99.4%) and shell strength prediction. We further discuss emerging challenges such as noise interference, signal variability, and scalability for industrial deployment. The integration of explainable AI, multimodal data acquisition, and edge computing is proposed as a future direction for developing intelligent, scalable, and cost-effective eggshell inspection systems. This comprehensive analysis provides a valuable reference for advancing nondestructive quality control in poultry product supply chains.
(This article belongs to the Section Smart Agriculture)
