Search Results (599)

Search Parameters:
Keywords = remote automated monitoring

12 pages, 507 KB  
Article
Clinical Assessment of a Virtual Reality Perimeter Versus the Humphrey Field Analyzer: Comparative Reliability, Usability, and Prospective Applications
by Marco Zeppieri, Caterina Gagliano, Francesco Cappellani, Federico Visalli, Fabiana D’Esposito, Alessandro Avitabile, Roberta Amato, Alessandra Cuna and Francesco Pellegrini
Vision 2025, 9(4), 86; https://doi.org/10.3390/vision9040086 (registering DOI) - 11 Oct 2025
Viewed by 76
Abstract
Background: This study compared the performance of a Head-mounted Virtual Reality Perimeter (HVRP) with the Humphrey Field Analyzer (HFA), the established standard in automated perimetry. The HFA is constrained by lengthy testing, bulky equipment, and limited patient comfort, and comparative data on newer head-mounted virtual reality perimeters are scarce, leaving uncertainty about their clinical reliability and potential advantages. Aim: The aim was to evaluate parameters such as visual field outcomes, portability, patient comfort, eye tracking, and usability. Methods: Participants underwent testing with both devices, assessing metrics such as mean deviation (MD), pattern standard deviation (PSD), and test duration. Results: The HVRP demonstrated small but statistically significant differences in MD and PSD compared to the HFA, with a consistent trend across participants. MD values were slightly more negative for the HFA than the HVRP (average difference −0.60 dB, p = 0.0006), while PSD was marginally higher with the HFA (average difference 0.38 dB, p = 0.00018). Although statistically significant, these differences were small in magnitude and do not undermine the clinical utility or reproducibility of the device. Notably, the HVRP showed markedly shorter testing times (7.15 vs. 18.11 min, mean difference 10.96 min, p < 0.0001). Its lightweight, portable design allowed for bedside and home testing, enhancing accessibility for pediatric, geriatric, and mobility-impaired patients. Participants reported greater comfort with the headset design, which eliminated the need for a chin rest. The device also offers potential for AI integration and remote data analysis. Conclusions: The HVRP proved to be a reliable, user-friendly alternative to traditional perimetry. Its advantages in comfort, portability, and test efficiency support its use in clinical practice and expand possibilities for bedside assessment, home monitoring, and remote screening, particularly in populations with limited access to conventional perimetry. Full article
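
The device comparison above reduces to paired, per-participant differences in MD, PSD, and test duration. A minimal sketch of that kind of analysis (not the authors' code) using SciPy's paired t-test; the arrays below are hypothetical placeholders, not study data:

```python
# Paired comparison of two perimeters on the same participants.
# Illustrative sketch only; the values below are placeholders, not study data.
import numpy as np
from scipy import stats

hfa_md = np.array([-2.1, -3.4, -1.8, -5.0, -2.7])   # HFA mean deviation (dB), hypothetical
hvrp_md = np.array([-1.6, -2.9, -1.0, -4.5, -2.2])  # HVRP mean deviation (dB), hypothetical

diff = hfa_md - hvrp_md                  # negative mean => HFA reads more negative MD
t, p = stats.ttest_rel(hfa_md, hvrp_md)  # paired t-test across participants
print(f"mean difference = {diff.mean():.2f} dB, t = {t:.2f}, p = {p:.4f}")
```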

21 pages, 14964 KB  
Article
An Automated Framework for Abnormal Target Segmentation in Levee Scenarios Using Fusion of UAV-Based Infrared and Visible Imagery
by Jiyuan Zhang, Zhonggen Wang, Jing Chen, Fei Wang and Lyuzhou Gao
Remote Sens. 2025, 17(20), 3398; https://doi.org/10.3390/rs17203398 - 10 Oct 2025
Viewed by 158
Abstract
Levees are critical for flood defence, but their integrity is threatened by hazards such as piping and seepage, especially during high-water-level periods. Traditional manual inspections for these hazards and associated emergency response elements, such as personnel and assets, are inefficient and often impractical. While UAV-based remote sensing offers a promising alternative, the effective fusion of multi-modal data and the scarcity of labelled data for supervised model training remain significant challenges. To overcome these limitations, this paper reframes levee monitoring as an unsupervised anomaly detection task. We propose a novel, fully automated framework that unifies geophysical hazards and emergency response elements into a single analytical category of “abnormal targets” for comprehensive situational awareness. The framework consists of three key modules: (1) a state-of-the-art registration algorithm to precisely align infrared and visible images; (2) a generative adversarial network to fuse the thermal information from IR images with the textural details from visible images; and (3) an adaptive, unsupervised segmentation module where a mean-shift clustering algorithm, with its hyperparameters automatically tuned by Bayesian optimization, delineates the targets. We validated our framework on a real-world dataset collected from a levee on the Pajiang River, China. The proposed method demonstrates superior performance over all baselines, achieving an Intersection over Union of 0.348 and a macro F1-Score of 0.479. This work provides a practical, training-free solution for comprehensive levee monitoring and demonstrates the synergistic potential of multi-modal fusion and automated machine learning for disaster management. Full article
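
The unsupervised segmentation module described above clusters fused-image pixels with mean shift and tunes its bandwidth automatically. A minimal sketch of that idea (not the authors' pipeline), with random placeholder features and a plain search scored by silhouette standing in for their Bayesian optimization:

```python
# Mean-shift clustering of fused IR/visible pixel features with automatic bandwidth selection.
# Sketch only: a coarse bandwidth search replaces the paper's Bayesian optimization.
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
pixels = rng.random((2000, 3))  # placeholder for flattened fused-image features

best = None
for bandwidth in (0.1, 0.2, 0.3, 0.4):
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(pixels)
    if len(set(labels)) < 2:
        continue  # silhouette needs at least two clusters
    score = silhouette_score(pixels, labels, sample_size=1000, random_state=0)
    if best is None or score > best[0]:
        best = (score, bandwidth, labels)

score, bandwidth, labels = best
print(f"selected bandwidth = {bandwidth}, silhouette = {score:.3f}, clusters = {labels.max() + 1}")
```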

23 pages, 12281 KB  
Article
Vegetation Classification and Extraction of Urban Green Spaces Within the Fifth Ring Road of Beijing Based on YOLO v8
by Bin Li, Xiaotian Xu, Yingrui Duan, Hongyu Wang, Xu Liu, Yuxiao Sun, Na Zhao, Shaoning Li and Shaowei Lu
Land 2025, 14(10), 2005; https://doi.org/10.3390/land14102005 - 6 Oct 2025
Viewed by 335
Abstract
Real-time, accurate and detailed monitoring of urban green space is of great significance for constructing the urban ecological environment and maximizing ecological benefits. Although high-resolution remote sensing technology provides rich ground object information, it also makes the surface information of urban green spaces more complex. Existing classification methods often struggle to meet the requirements of classification accuracy and the automation demands of high-resolution images. This study utilized GF-7 remote sensing imagery to construct an urban green space classification method for Beijing. The study used the YOLO v8 model as the framework to conduct a fine classification of urban green spaces within the Fifth Ring Road of Beijing, distinguishing between evergreen trees, deciduous trees, shrubs and grasslands. The aims were to address the limitations of insufficient model fit and coarse-grained classifications in existing studies, and to improve vegetation extraction accuracy for green spaces in northern temperate cities (with Beijing as a typical example). The results show that the overall classification accuracy of the trained YOLO v8 model is 89.60%, which is 25.3% and 28.8% higher than that of traditional machine learning methods such as Maximum Likelihood and Support Vector Machine, respectively. The model achieved extraction accuracies of 92.92%, 93.40%, 87.67%, and 93.34% for evergreen trees, deciduous trees, shrubs, and grasslands, respectively. This result confirms that the combination of deep learning and high-resolution remote sensing images can effectively enhance the classification extraction of urban green space vegetation, providing technical support and data guarantees for the refined management of green spaces and “garden cities” in megacities such as Beijing. Full article
(This article belongs to the Special Issue Vegetation Cover Changes Monitoring Using Remote Sensing Data)
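
The study above fine-tunes YOLO v8 to separate four vegetation classes in GF-7 imagery. A minimal sketch of that kind of workflow using the ultralytics package (not the authors' code); the dataset YAML, tile paths, and hyperparameters are placeholders:

```python
# Fine-tune a YOLOv8 segmentation model on labelled green-space tiles, then run inference.
# The dataset YAML and weights are placeholders; the four classes follow the paper
# (evergreen trees, deciduous trees, shrubs, grasslands).
from ultralytics import YOLO

model = YOLO("yolov8s-seg.pt")  # pretrained segmentation weights as a starting point
model.train(data="greenspace.yaml", epochs=100, imgsz=640)  # YAML lists train/val tiles and classes
results = model.predict("gf7_tile_0001.png", conf=0.25)     # per-tile vegetation masks
print(results[0].boxes.cls)  # predicted class indices for detected regions
```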

26 pages, 16624 KB  
Article
Design and Evaluation of an Automated Ultraviolet-C Irradiation System for Maize Seed Disinfection and Monitoring
by Mario Rojas, Claudia Hernández-Aguilar, Juana Isabel Méndez, David Balderas-Silva, Arturo Domínguez-Pacheco and Pedro Ponce
Sensors 2025, 25(19), 6070; https://doi.org/10.3390/s25196070 - 2 Oct 2025
Viewed by 269
Abstract
This study presents the development and evaluation of an automated ultraviolet-C irradiation system for maize seed treatment, emphasizing disinfection performance, environmental control, and vision-based monitoring. The system features dual 8-watt ultraviolet-C lamps, sensors for temperature and humidity, and an air extraction unit to regulate the microclimate of the chamber. Without air extraction, radiation stabilized within one minute, with internal temperatures increasing by 5.1 °C and humidity decreasing by 13.26% over 10 min. When activated, the extractor reduced heat build-up by 1.4 °C, minimized humidity fluctuations (4.6%), and removed odors, although it also attenuated the intensity of ultraviolet-C by up to 19.59%. A 10 min ultraviolet-C treatment significantly reduced the fungal infestation in maize seeds by 23.5–26.25% under both extraction conditions. Thermal imaging confirmed localized heating on seed surfaces, which stressed the importance of temperature regulation during exposure. Notable color changes (ΔE>2.3) in treated seeds suggested radiation-induced pigment degradation. Ultraviolet-C intensity mapping revealed spatial non-uniformity, with measurements limited to a central axis, indicating the need for comprehensive spatial analysis. The integrated computer vision system successfully detected seed contours and color changes under high-contrast conditions, but underperformed under low-light or uneven illumination. These limitations highlight the need for improved image processing and consistent lighting to ensure accurate monitoring. Overall, the chamber shows strong potential as a non-chemical seed disinfection tool. Future research will focus on improving radiation uniformity, assessing effects on germination and plant growth, and advancing system calibration, safety mechanisms, and remote control capabilities. Full article
(This article belongs to the Section Smart Agriculture)
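
The seed-colour result above is expressed as ΔE in CIELAB space, where differences above roughly 2.3 are commonly treated as perceptible. A minimal sketch of that computation (not the authors' code) with scikit-image; the image paths are placeholders:

```python
# Compute the CIE76 colour difference (Delta E) between seeds before and after UV-C exposure.
# Paths are placeholders; a Delta E above ~2.3 is a common just-noticeable-difference threshold.
import numpy as np
from skimage import io
from skimage.color import rgb2lab

before = rgb2lab(io.imread("seed_before.png")[..., :3] / 255.0)
after = rgb2lab(io.imread("seed_after.png")[..., :3] / 255.0)

delta_e = np.sqrt(((before - after) ** 2).sum(axis=-1))  # per-pixel CIE76 Delta E
print(f"mean Delta E = {delta_e.mean():.2f}, share above 2.3 = {(delta_e > 2.3).mean():.1%}")
```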

31 pages, 1983 KB  
Review
Integrating Remote Sensing and Autonomous Robotics in Precision Agriculture: Current Applications and Workflow Challenges
by Magdalena Łągiewska and Ewa Panek-Chwastyk
Agronomy 2025, 15(10), 2314; https://doi.org/10.3390/agronomy15102314 - 30 Sep 2025
Viewed by 750
Abstract
Remote sensing technologies are increasingly integrated with autonomous robotic platforms to enhance data-driven decision-making in precision agriculture. Rather than replacing conventional platforms such as satellites or UAVs, autonomous ground robots complement them by enabling high-resolution, site-specific observations in real time, especially at the plant level. This review analyzes how remote sensing sensors—including multispectral, hyperspectral, LiDAR, and thermal—are deployed via robotic systems for specific agricultural tasks such as canopy mapping, weed identification, soil moisture monitoring, and precision spraying. Key benefits include higher spatial and temporal resolution, improved monitoring of under-canopy conditions, and enhanced task automation. However, the practical deployment of such systems is constrained by terrain complexity, power demands, and sensor calibration. The integration of artificial intelligence and IoT connectivity emerges as a critical enabler for responsive, scalable solutions. By focusing on how autonomous robots function as mobile sensor platforms, this article contributes to the understanding of their role within modern precision agriculture workflows. The findings support future development pathways aimed at increasing operational efficiency and sustainability across diverse crop systems. Full article

35 pages, 17848 KB  
Article
Satellite-Based Multi-Decadal Shoreline Change Detection by Integrating Deep Learning with DSAS: Eastern and Southern Coastal Regions of Peninsular Malaysia
by Saima Khurram, Amin Beiranvand Pour, Milad Bagheri, Effi Helmy Ariffin, Mohd Fadzil Akhir and Saiful Bahri Hamzah
Remote Sens. 2025, 17(19), 3334; https://doi.org/10.3390/rs17193334 - 29 Sep 2025
Viewed by 371
Abstract
Coasts are critical ecological, economic and social interfaces between terrestrial and marine systems. The current upsurge in the acquisition and availability of remote sensing datasets, such as Landsat remote sensing data series, provides new opportunities for analyzing multi-decadal coastal changes and other components of coastal risk. The emergence of machine learning-based techniques represents a new trend that can support large-scale coastal monitoring and modeling using remote sensing big data. This study presents a comprehensive multi-decadal analysis of coastal changes for the period from 1990 to 2024 using Landsat remote sensing data series along the eastern and southern coasts of Peninsular Malaysia. These coastal regions include the states of Kelantan, Terengganu, Pahang, and Johor. An innovative approach combining deep learning-based shoreline extraction with the Digital Shoreline Analysis System (DSAS) was meticulously applied to the Landsat datasets. Two semantic segmentation models, U-Net and DeepLabV3+, were evaluated for automated shoreline delineation from the Landsat imagery, with U-Net demonstrating superior boundary precision and generalizability. The DSAS framework quantified shoreline change metrics—including Net Shoreline Movement (NSM), Shoreline Change Envelope (SCE), and Linear Regression Rate (LRR)—across the states of Kelantan, Terengganu, Pahang, and Johor. The results reveal distinct spatial–temporal patterns: Kelantan exhibited the highest rates of shoreline change with erosion of −64.9 m/year and accretion of up to +47.6 m/year; Terengganu showed a moderated change partly due to recent coastal protection structures; Pahang displayed both significant erosion, particularly south of the Pahang River with rates of over −50 m/year, and accretion near river mouths; Johor’s coastline predominantly exhibited accretion, with NSM values of over +1900 m, linked to extensive land reclamation activities and natural sediment deposition, although local erosion was observed along the west coast. This research highlights emerging erosion hotspots and, in some regions, the impact of engineered coastal interventions, providing critical insights for sustainable coastal zone management in Malaysia’s monsoon-influenced tropical coastal environment. The integrated deep learning and DSAS approach applied to Landsat remote sensing data series provides a scalable and reproducible framework for long-term coastal monitoring and climate adaptation planning around the world. Full article
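
The change metrics quoted above have simple per-transect definitions in DSAS: NSM is the signed distance between the oldest and most recent shoreline, SCE the spread between the farthest and closest shorelines, and LRR the slope of a least-squares fit of shoreline position against time. A minimal worked example (not the authors' code) on hypothetical transect positions:

```python
# DSAS-style change metrics for one transect: positions are shoreline distances (m)
# from a fixed baseline at each survey year. The numbers are illustrative only.
import numpy as np

years = np.array([1990, 2000, 2010, 2020, 2024])
positions = np.array([120.0, 111.5, 104.0, 97.5, 95.0])  # metres seaward of the baseline

nsm = positions[-1] - positions[0]        # Net Shoreline Movement: youngest minus oldest
sce = positions.max() - positions.min()   # Shoreline Change Envelope: total spread
lrr = np.polyfit(years, positions, 1)[0]  # Linear Regression Rate: slope in m/year

print(f"NSM = {nsm:.1f} m, SCE = {sce:.1f} m, LRR = {lrr:.2f} m/yr")
```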

17 pages, 20573 KB  
Article
Digital Twin-Based Intelligent Monitoring System for Robotic Wiring Process
by Jinhua Cai, Hongchang Ding, Ping Wang, Xiaoqiang Guo, Han Hou, Tao Jiang and Xiaoli Qiao
Sensors 2025, 25(19), 5978; https://doi.org/10.3390/s25195978 - 26 Sep 2025
Viewed by 510
Abstract
In response to the growing demand for automation in aerospace harness manufacturing, this study proposes a digital twin-based intelligent monitoring system for robotic wiring operations. The system integrates a seven-degree-of-freedom robotic platform with an adaptive servo gripper and employs a five-dimensional digital twin framework to synchronize physical and virtual entities. Key innovations include a coordinated motion model for minimizing joint displacement, a particle-swarm-optimized backpropagation neural network (PSO-BPNN) for adaptive gripping based on wire characteristics, and a virtual–physical closed-loop interaction strategy covering the entire wiring process. Methodologically, the system enables motion planning, quality prediction, and remote monitoring through Unity3D visualization, SQL-driven data processing, and real-time mapping. The experimental results demonstrate that the system can stably and efficiently complete complex wiring tasks with 1:1 trajectory reproduction. Moreover, the PSO-BPNN model significantly reduces prediction error compared to standard BPNN methods. The results confirm the system’s capability to ensure precise wire placement, enhance operational efficiency, and reduce error risks. This work offers a practical and intelligent solution for aerospace harness production and shows strong potential for extension to multi-robot collaboration and full production line scheduling. Full article
(This article belongs to the Section Sensors and Robotics)
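
The gripping model above couples particle swarm optimization with a backpropagation network (PSO-BPNN). A minimal sketch of that coupling (not the authors' model): a plain numpy PSO tunes two hyperparameters of a small MLP regressor mapping wire characteristics to a gripping target, with all data and ranges as placeholders:

```python
# PSO wrapped around a small backpropagation network (MLPRegressor).
# Sketch only: the swarm tunes hidden-layer size and learning rate on synthetic data;
# the paper's PSO-BPNN is trained on real wire-characteristic measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 3))  # e.g. wire diameter, stiffness, insulation thickness (placeholders)
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)  # synthetic gripping target

def fitness(params):
    hidden, lr = int(round(params[0])), 10 ** params[1]
    net = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                       max_iter=500, random_state=0)
    return -cross_val_score(net, X, y, cv=3, scoring="r2").mean()  # minimise negative R^2

# Plain PSO over (hidden units in [4, 32], log10 learning rate in [-3, -1]).
n_particles, n_iters = 6, 5
lo, hi = np.array([4.0, -3.0]), np.array([32.0, -1.0])
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best hidden units = {int(round(gbest[0]))}, learning rate = {10 ** gbest[1]:.4f}")
```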

42 pages, 5042 KB  
Review
A Comprehensive Review of Remote Sensing and Artificial Intelligence Integration: Advances, Applications, and Challenges
by Nikolay Kazanskiy, Roman Khabibullin, Artem Nikonorov and Svetlana Khonina
Sensors 2025, 25(19), 5965; https://doi.org/10.3390/s25195965 - 25 Sep 2025
Viewed by 1290
Abstract
The integration of remote sensing (RS) and artificial intelligence (AI) has revolutionized Earth observation, enabling automated, efficient, and precise analysis of vast and complex datasets. RS techniques, leveraging satellite imagery, aerial photography, and ground-based sensors, provide critical insights into environmental monitoring, disaster response, agriculture, and urban planning. The rapid developments in AI, specifically machine learning (ML) and deep learning (DL), have significantly enhanced the processing and interpretation of RS data. AI-powered models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and reinforcement learning (RL) algorithms, have demonstrated remarkable capabilities in feature extraction, classification, anomaly detection, and predictive modeling. This paper provides a comprehensive survey of the latest developments at the intersection of RS and AI, highlighting key methodologies, applications, and emerging challenges. While AI-driven RS offers unprecedented opportunities for automation and decision-making, issues related to model generalization, explainability, data heterogeneity, and ethical considerations remain significant hurdles. The review concludes by discussing future research directions, emphasizing the need for improved model interpretability, multimodal learning, and real-time AI deployment for global-scale applications. Full article
(This article belongs to the Section Remote Sensors)

20 pages, 12345 KB  
Article
Automatic Speech Recognition of Public Safety Radio Communications for Interstate Incident Detection and Notification
by Christopher M. Gartner, Vihaan Vajpayee, Jairaj Desai and Darcy M. Bullock
Smart Cities 2025, 8(5), 157; https://doi.org/10.3390/smartcities8050157 - 24 Sep 2025
Viewed by 388
Abstract
Most urban areas have Traffic Management Centers (TMCs) that rely partially on communication with 9-1-1 centers for incident detection. This level of awareness is often lacking on rural interstates spanning several 9-1-1 jurisdictions. This paper presents a novel approach to extending TMC visibility by automatically monitoring regional 9-1-1 dispatch channels using off-the-shelf hardware and open-source speech-to-text libraries. We describe a proof-of-concept deployment covering 71 miles of rural I-65 in Indiana, successfully monitoring four county dispatch centers from a single location and transcribing live audio within 60 s of broadcast. The work's primary contribution is demonstrating the feasibility and practical value of automated incident detection for rural interstates, and the technology is implementation-ready for extending TMC visibility on rural interstate segments. Further work is underway on scalable procedures for integrating multiple remote sites, extracting more diverse keyword sets, investigating optimal speech-to-text models, and assessing the technical aspects of the experimental procedures described in this manuscript. Full article
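
The pipeline above pairs a scanner feed with open-source speech-to-text and keyword matching. The paper does not name a specific library; a minimal sketch of the idea using openai-whisper, with the audio path and keyword list as placeholders:

```python
# Transcribe a recorded dispatch-audio clip and flag incident-related keywords.
# Sketch only: openai-whisper is one open-source option; the paper's toolchain is not specified here.
import whisper

KEYWORDS = {"crash", "collision", "lanes blocked", "rollover", "i-65"}  # illustrative keyword set

model = whisper.load_model("base")              # small general-purpose English model
result = model.transcribe("dispatch_clip.wav")  # returns a dict including the full transcript
text = result["text"].lower()

hits = [kw for kw in KEYWORDS if kw in text]
if hits:
    print(f"possible incident keywords detected: {hits}")
```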

26 pages, 12387 KB  
Article
Mapping for Larimichthys crocea Aquaculture Information with Multi-Source Remote Sensing Data Based on Segment Anything Model
by Xirui Xu, Ke Nie, Sanling Yuan, Wei Fan, Yanan Lu and Fei Wang
Fishes 2025, 10(10), 477; https://doi.org/10.3390/fishes10100477 - 24 Sep 2025
Viewed by 346
Abstract
Monitoring Larimichthys crocea aquaculture in a low-cost, efficient and flexible manner with remote sensing data is crucial for the optimal management and sustainable development of the aquaculture industry and intelligent fisheries. An innovative automated framework, based on the Segment Anything Model (SAM) and multi-source high-resolution remote sensing imagery, is proposed for high-precision aquaculture facility extraction, overcoming the low efficiency and limited accuracy of traditional manual inspection methods. The method includes systematic optimization of SAM segmentation parameters for different data sources and rigorous evaluation of model performance at multiple spatial resolutions; the impact of different spectral band combinations on segmentation quality is also systematically analyzed. Experimental results demonstrate a significant correlation between resolution and accuracy, with UAV-derived imagery achieving exceptional segmentation accuracy (97.71%), followed by Jilin-1 (91.64%) and Sentinel-2 (72.93%) data. Notably, the NIR-Blue-Red band combination exhibited superior performance in delineating aquaculture infrastructure, suggesting its optimal utility for such applications. The result is a robust and scalable solution for automatic facility extraction that offers significant insights for extending SAM's capabilities to broader remote sensing applications in marine resource assessment. Full article
(This article belongs to the Section Fishery Facilities, Equipment, and Information Technology)
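
The extraction step above relies on SAM's automatic mask generation, with its sampling and filtering parameters tuned per data source. A minimal sketch (not the authors' configuration) using Meta's segment-anything package; the checkpoint path, image path, and parameter values are placeholders:

```python
# Generate candidate masks for aquaculture cages in a remote sensing tile with SAM.
# Checkpoint path, image path, and parameter values are placeholders, not the tuned settings.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,           # density of the prompt grid
    pred_iou_thresh=0.88,         # keep only confident masks
    stability_score_thresh=0.92,  # drop unstable masks
)

image = cv2.cvtColor(cv2.imread("uav_tile.png"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)  # list of dicts with 'segmentation', 'area', 'bbox', ...
print(f"{len(masks)} candidate masks; largest area = {max(m['area'] for m in masks)} px")
```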

24 pages, 11665 KB  
Article
Response of Nearby Sensors to Variable Doses of Nitrogen Fertilization in Winter Fodder Crops Under Mediterranean Climate
by Luís Silva, Caroline Brunelli, Raphael Moreira, Sofia Barbosa, Manuela Fernandes, Andreia Miguel, Benvindo Maçãs, Constantino Valero, Manuel Patanita, Fernando Cebola Lidon and Luís Alcino Conceição
Sensors 2025, 25(18), 5811; https://doi.org/10.3390/s25185811 - 17 Sep 2025
Viewed by 432
Abstract
The sustainable intensification of forage production in Mediterranean climates requires technological solutions that optimize the use of agricultural inputs. This study aimed to evaluate the performance of proximal optical sensors in recommending and monitoring variable rate nitrogen fertilization in winter forage crops cultivated under Mediterranean conditions. A handheld multispectral active sensor (HMA), a multispectral camera on an unmanned aerial vehicle (UAV), and a passive on-the-go sensor (OTG) were used to generate real-time nitrogen (N) application prescriptions. The sensors were assessed for their correlation with agronomic parameters such as plant fresh matter (PFM), plant dry matter (PDM), plant N content (PNC), crude protein (CP, %), crude protein yield per unit area (CPyield), and N uptake (NUp). Real-time N fertilization stood out by promoting a 15.23% reduction in the total N fertilizer applied compared with a usual farmer-fixed dose of 150 kg ha⁻¹, saving 22.90 kg ha⁻¹ without compromising crop productivity. Additionally, NDVI_OTG showed a moderate simple linear correlation with PFM (R² = 0.52), confirming its effectiveness for prescription based on vegetative vigor. UAV_II (NDVI after fertilization) showed even stronger correlations with CP (R² = 0.58), CPyield (R² = 0.53), and NUp (R² = 0.53), highlighting its sensitivity to physiological responses induced by N fertilization. Although the HMA sensor operates via point readings, it also proved effective, with significant correlations with NUp (R² = 0.55) and CPyield (R² = 0.53). It is concluded that integrating sensors enables both precise input prescription and efficient monitoring of plant physiological responses, fostering cost-effectiveness, sustainability, and improved agronomic efficiency. Full article
(This article belongs to the Section Smart Agriculture)
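
The sensor evaluation above rests on two routine computations: NDVI from red and near-infrared reflectance, and simple linear regressions of NDVI against agronomic variables reported as R². A minimal worked sketch (not the authors' code) on placeholder reflectance and yield values:

```python
# NDVI from red/NIR reflectance and a simple linear fit of NDVI against a crop variable.
# All numbers are placeholders; the study reports R^2 values around 0.5 for such fits.
import numpy as np
from scipy import stats

red = np.array([0.08, 0.10, 0.07, 0.12, 0.09, 0.11])
nir = np.array([0.42, 0.38, 0.45, 0.30, 0.40, 0.35])
ndvi = (nir - red) / (nir + red)

fresh_matter = np.array([18.5, 16.0, 20.1, 12.4, 17.8, 14.9])  # t/ha, hypothetical PFM values

fit = stats.linregress(ndvi, fresh_matter)
print(f"NDVI range {ndvi.min():.2f}-{ndvi.max():.2f}, R^2 = {fit.rvalue ** 2:.2f}")
```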

9 pages, 486 KB  
Proceeding Paper
A Comprehensive Remote Monitoring System for Automated Diabetes Risk Assessment and Control Through Smart Wearables and Personal Health Devices
by Jawad Ali, Manzoor Hussain and Trisiani Dewi Hendrawati
Eng. Proc. 2025, 107(1), 91; https://doi.org/10.3390/engproc2025107091 - 15 Sep 2025
Viewed by 451
Abstract
Diabetes, a chronic metabolic disease marked by elevated blood glucose levels, affects millions of people globally. It is closely linked to a lower quality of life and a markedly higher risk of potentially deadly complications, such as heart disease, renal failure, and other organ dysfunctions. Early detection and ongoing monitoring are therefore essential to manage diabetes effectively and avoid serious consequences. Remote health monitoring has emerged as a viable and promising option for proactive healthcare thanks to contemporary technology, particularly wearables and mobile computing. In this work, we propose a comprehensive remote monitoring framework intended to automatically predict, identify, and manage diabetes risks. The system integrates smartphones, wearable sensors, and specialized medical equipment to facilitate real-time data collection, analysis, and tailored feedback. In addition to enhancing patient engagement and lowering the strain on conventional healthcare infrastructure, the proposed model aims to help patients and healthcare providers maintain improved glycemic control. We employed a tenfold stratified cross-validation approach to assess the efficacy of the framework, and the results showed strong performance: the system attained a specificity of 79.00%, a sensitivity of 87.20%, and an accuracy of 83.20%. These outcomes show that our framework can serve as a dependable and scalable remote diabetes management solution, opening the door to more intelligent and accessible healthcare systems worldwide. Full article
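
The evaluation above uses tenfold stratified cross-validation and reports specificity, sensitivity, and accuracy. A minimal sketch of that protocol (not the authors' model or data) with scikit-learn, using a synthetic binary risk dataset and a logistic-regression stand-in:

```python
# Tenfold stratified cross-validation reporting sensitivity, specificity, and accuracy.
# Synthetic data and a generic classifier; the paper's model and features differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=600, n_features=8, weights=[0.7, 0.3], random_state=0)
clf = LogisticRegression(max_iter=1000)

sens, spec, acc = [], [], []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    clf.fit(X[train], y[train])
    tn, fp, fn, tp = confusion_matrix(y[test], clf.predict(X[test])).ravel()
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))
    acc.append((tp + tn) / (tp + tn + fp + fn))

print(f"sensitivity {np.mean(sens):.3f}, specificity {np.mean(spec):.3f}, accuracy {np.mean(acc):.3f}")
```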

23 pages, 10375 KB  
Article
Extraction of Photosynthetic and Non-Photosynthetic Vegetation Cover in Typical Grasslands Using UAV Imagery and an Improved SegFormer Model
by Jie He, Xiaoping Zhang, Weibin Li, Du Lyu, Yi Ren and Wenlin Fu
Remote Sens. 2025, 17(18), 3162; https://doi.org/10.3390/rs17183162 - 12 Sep 2025
Viewed by 487
Abstract
Accurate monitoring of the coverage and distribution of photosynthetic (PV) and non-photosynthetic vegetation (NPV) in the grasslands of semi-arid regions is crucial for understanding the environment and addressing climate change. However, the extraction of PV and NPV information from Unmanned Aerial Vehicle (UAV) remote sensing imagery is often hindered by challenges such as low extraction accuracy and blurred boundaries. To overcome these limitations, this study proposed an improved semantic segmentation model, designated SegFormer-CPED. The model was developed based on the SegFormer architecture, incorporating several synergistic optimizations. Specifically, a Convolutional Block Attention Module (CBAM) was integrated into the encoder to enhance early-stage feature perception, while a Polarized Self-Attention (PSA) module was embedded to strengthen contextual understanding and mitigate semantic loss. An Edge Contour Extraction Module (ECEM) was introduced to refine boundary details. Concurrently, the Dice Loss function was employed to replace the Cross-Entropy Loss, thereby more effectively addressing the class imbalance issue and significantly improving both the segmentation accuracy and boundary clarity of PV and NPV. To support model development, a high-quality PV and NPV segmentation dataset for Hengshan grassland was also constructed. Comprehensive experimental results demonstrated that the proposed SegFormer-CPED model achieved state-of-the-art performance, with a mIoU of 93.26% and an F1-score of 96.44%. It significantly outperformed classic architectures and surpassed all leading frameworks benchmarked here. Its high-fidelity maps can bridge field surveys and satellite remote sensing. Ablation studies verified the effectiveness of each improved module and its synergistic interplay. Moreover, this study successfully utilized SegFormer-CPED to perform fine-grained monitoring of the spatiotemporal dynamics of PV and NPV in the Hengshan grassland, confirming that the model-estimated fPV and fNPV were highly correlated with ground survey data. The proposed SegFormer-CPED model provides a robust and effective solution for the precise, semi-automated extraction of PV and NPV from high-resolution UAV imagery. Full article
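
One of the modifications above replaces cross-entropy with the Dice loss to counter class imbalance among PV, NPV, and background pixels. A minimal PyTorch sketch of a multi-class soft Dice loss (not the authors' implementation):

```python
# Soft multi-class Dice loss: overlap-based, so minority classes are not swamped
# by background pixels the way they can be with plain cross-entropy.
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """logits: (N, C, H, W); target: (N, H, W) integer class labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * onehot).sum(dims)
    union = probs.sum(dims) + onehot.sum(dims)
    dice = (2 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()

# Example: 3 classes (background, PV, NPV) on a 4-image batch of 64x64 tiles.
logits = torch.randn(4, 3, 64, 64)
target = torch.randint(0, 3, (4, 64, 64))
print(dice_loss(logits, target))
```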

29 pages, 5213 KB  
Article
Design and Implementation of a Novel Intelligent Remote Calibration System Based on Edge Intelligence
by Quan Wang, Jiliang Fu, Xia Han, Xiaodong Yin, Jun Zhang, Xin Qi and Xuerui Zhang
Symmetry 2025, 17(9), 1434; https://doi.org/10.3390/sym17091434 - 3 Sep 2025
Viewed by 652
Abstract
Calibration of power equipment has become an essential task in modern power systems. This paper proposes a distributed remote calibration prototype based on a cloud–edge–end architecture by integrating intelligent sensing, Internet of Things (IoT) communication, and edge computing technologies. The prototype employs a high-precision frequency-to-voltage conversion module leveraging satellite signals to address traceability and value transmission challenges in remote calibration, thereby ensuring reliability and stability throughout the process. Additionally, an environmental monitoring module tracks parameters such as temperature, humidity, and electromagnetic interference. Combined with video surveillance and optical character recognition (OCR), this enables intelligent, end-to-end recording and automated data extraction during calibration. Furthermore, a cloud-edge task scheduling algorithm is implemented to offload computational tasks to edge nodes, maximizing resource utilization within the cloud–edge collaborative system and enhancing service quality. The proposed prototype extends existing cloud–edge collaboration frameworks by incorporating calibration instruments and sensing devices into the network, thereby improving the intelligence and accuracy of remote calibration across multiple layers. Furthermore, this approach facilitates synchronized communication and calibration operations across symmetrically deployed remote facilities and reference devices, providing solid technical support to ensure that measurement equipment meets the required precision and performance criteria. Full article
(This article belongs to the Section Computer)
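
Part of the automation above is reading instrument displays from the video stream with OCR so calibration readings are logged without manual transcription. A minimal sketch of that step (not the prototype's software) using pytesseract on a placeholder display frame:

```python
# Read a numeric value from a captured instrument-display frame with OCR.
# Sketch only: the frame path is a placeholder and the prototype's OCR stack is not specified beyond "OCR".
import re
from PIL import Image
import pytesseract

frame = Image.open("display_frame.png").convert("L")        # grayscale crop of the readout
raw = pytesseract.image_to_string(frame, config="--psm 7")  # treat the crop as a single text line

match = re.search(r"[-+]?\d+(\.\d+)?", raw)
print(f"OCR text: {raw.strip()!r}, parsed reading: {match.group(0) if match else 'none'}")
```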

36 pages, 40569 KB  
Article
Deep Learning Approaches for Fault Detection in Subsea Oil and Gas Pipelines: A Focus on Leak Detection Using Visual Data
by Viviane F. da Silva, Theodoro A. Netto and Bessie A. Ribeiro
J. Mar. Sci. Eng. 2025, 13(9), 1683; https://doi.org/10.3390/jmse13091683 - 1 Sep 2025
Viewed by 839
Abstract
The integrity of subsea oil and gas pipelines is essential for offshore safety and environmental protection. Conventional leak detection approaches, such as manual inspection and indirect sensing, are often costly, time-consuming, and prone to subjectivity, motivating the development of automated methods. In this study, we present a deep learning-based framework for detecting underwater leaks using images acquired in controlled experiments designed to reproduce representative conditions of subsea monitoring. The dataset was generated by simulating both gas and liquid leaks in a water tank environment, under scenarios that mimic challenges observed during Remotely Operated Vehicle (ROV) inspections along the Brazilian coast. It was further complemented with artificially generated synthetic images (Stable Diffusion) and publicly available subsea imagery. Multiple Convolutional Neural Network (CNN) architectures, including VGG16, ResNet50, InceptionV3, DenseNet121, InceptionResNetV2, EfficientNetB0, and a lightweight custom CNN, were trained with transfer learning and evaluated on validation and blind test sets. The best-performing models achieved stable performance during training and validation, with macro F1-scores above 0.80, and demonstrated improved generalization compared to traditional baselines such as VGG16. In blind testing, InceptionV3 achieved the most balanced performance across the three classes when trained with synthetic data and augmentation. The study demonstrates the feasibility of applying CNNs for vision-based leak detection in complex underwater environments. A key contribution is the release of a novel experimentally generated dataset, which supports reproducibility and establishes a benchmark for advancing automated subsea inspection methods. Full article
(This article belongs to the Section Ocean Engineering)
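
The classifiers above are standard ImageNet backbones fine-tuned with transfer learning for leak classes. A minimal Keras sketch of that setup (not the authors' training code); the dataset directory, class count, and hyperparameters are placeholders:

```python
# Transfer learning: frozen InceptionV3 backbone with a small classification head
# for three placeholder classes (e.g., no leak / gas leak / liquid leak).
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
base.trainable = False  # freeze the backbone; only the new head is trained first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0, input_shape=(299, 299, 3)),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "leak_frames/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=10)
```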
