Search Results (198)

Search Parameters:
Keywords = MobileFaceNet

31 pages, 12308 KB  
Article
An Improved MSEM-Deeplabv3+ Method for Intelligent Detection of Rock Mass Fractures
by Chi Zhang, Shu Gan, Xiping Yuan, Weidong Luo, Chong Ma and Yi Li
Remote Sens. 2026, 18(7), 1041; https://doi.org/10.3390/rs18071041 - 30 Mar 2026
Abstract
Fractures, as critical discontinuous structural planes in rock masses, directly govern their stability and serve as the core controlling factor in rock mechanics engineering. Existing deep learning models for fracture extraction face persistent challenges, including imbalanced integration of deep and shallow features, limited suppression of background noise, inadequate multi-scale feature representation, and large parameter sizes—making it difficult to strike a balance between detection accuracy and deployment efficiency. Focusing on the Wanshanshan quarry in Yunnan, this study first constructs a high-precision digital model using close-range photogrammetry and 3D real-scene reconstruction. A lightweight yet high-accuracy intelligent detection method, termed MSEM-Deeplabv3+, is then proposed for rock mass fracture extraction. The model adopts lightweight MobileNetV2 as the backbone network, incorporating inverted residual modules and depthwise separable convolutions, resulting in a parameter size of only 6.02 MB and FLOPs of 30.170 G—substantially reducing computational overhead. Furthermore, the proposed MAGF (Multi-Scale Attention Gated Fusion) and SCSA (Spatial-Channel Synergistic Attention) modules are integrated to enhance the representation of fracture details and semantic consistency while effectively suppressing multi-source and multi-scale background interference. Experimental results demonstrate that the proposed model achieves an mPA of 89.69%, mIoU of 83.71%, F1-Score of 90.41%, and Kappa coefficient of 80.81%, outperforming the classic Deeplabv3+ model by 5.81%, 6.18%, 4.53%, and 9.2%, respectively. It also significantly surpasses benchmark models such as U-Net and HRNet. The method accurately captures fine and continuous fracture details, preserves the spatial distribution of long-range continuous fractures, and maintains robust performance on the CFD cross-scene dataset, showcasing strong adaptability and generalization capability. This approach effectively mitigates the risks associated with manual high-altitude inspections and provides a lightweight, high-precision, non-contact intelligent solution for fracture detection in high-steep rock slopes.
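The parameter savings that MobileNetV2-style depthwise separable convolutions provide (cited above as one reason for the 6.02 MB model size) can be illustrated with a quick back-of-the-envelope calculation. This is a generic sketch of the standard factorization, not code from the paper; the kernel and channel sizes below are made up for illustration.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 3x3 kernel, 64 input channels, 128 output channels.
standard = conv_params(64, 128, 3)                  # 73728 parameters
separable = depthwise_separable_params(64, 128, 3)  # 576 + 8192 = 8768
print(standard, separable, round(standard / separable, 1))
```

For this hypothetical layer the factorized form needs roughly 8x fewer parameters, which is the mechanism behind the small backbones recurring throughout these results.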

22 pages, 4435 KB  
Article
Semantic Mapping in Public Indoor Environments Using Improved Instance Segmentation and Continuous-Frame Dynamic Constraint
by Yumin Lu, Xueyu Feng, Zonghuan Guo, Jianchao Wang, Lin Zhou and Yingcheng Lin
Electronics 2026, 15(7), 1392; https://doi.org/10.3390/electronics15071392 - 26 Mar 2026
Abstract
Reliable semantic perception is crucial for service robots operating in complex public indoor environments. However, existing semantic mapping approaches often face the dual challenges of high computational overhead and semantic redundancy in maps. To address these limitations, this paper proposes a low-resource semantic mapping framework based on improved instance segmentation and dynamic constraints from consecutive frames. First, we design the lightweight model MS-YOLO, which adopts MobileNetV4 as its backbone network and incorporates the SHViT neck module, effectively optimizing the balance between detection accuracy and computational cost. Second, we propose a consecutive-frame dynamic constraint method that eliminates redundant object annotations through consecutive-frame stability verification. Experimental results on both the fusion and custom datasets demonstrate that, compared to YOLOv8n-seg, MS-YOLO achieves improvements in accuracy, recall, and mAP@0.5, while reducing the number of parameters by 11.7% and floating-point operations (FLOPs) by 32.2%. Furthermore, compared to YOLOv11n-seg and YOLOv5n-seg, its FLOPs are reduced by 17.2% and 25.5%, respectively. Finally, the successful deployment and field validation of this system on the Jetson Orin NX platform demonstrate its real-time capability and engineering practicality for edge computing in public indoor service robots.
(This article belongs to the Section Artificial Intelligence)

40 pages, 2214 KB  
Article
A CNN-ViT Hybrid Architecture Res101-MViT-Ens for Accurate and Lightweight Automated Ocular Disease Diagnosis
by Hao Wang, Ting Ke and Hui Lv
Appl. Sci. 2026, 16(6), 2905; https://doi.org/10.3390/app16062905 - 18 Mar 2026
Abstract
Automated ocular disease diagnosis faces critical challenges including insufficient diagnostic precision, local–global feature imbalance, rigid feature fusion, weak cross-domain generalization, and difficult lightweight deployment. This study aims to develop a high-performance, generalizable, and deployable hybrid deep learning architecture for accurate multi-class ocular disease diagnosis. We propose the Res101-MViT-Ens hybrid architecture, which fuses ResNet101 for local fine-grained feature extraction and MobileViT-XXS for global contextual modeling via an end-to-end dynamic learnable weight fusion mechanism, with class-balanced sampling and medically adaptive augmentation for data preprocessing. The model is validated on the ODIR-5K dataset and cross-evaluated on three heterogeneous datasets (MESSIDOR-2, Kaggle DR, EyePACS). It achieves 99.44% accuracy, a 99.41% F1-score, and 99.32% Kappa on ODIR-5K, with a 99.46% average cross-dataset accuracy, outperforming state-of-the-art models. With 54 M parameters and 42.6 ms per-image inference latency on the Snapdragon 8 Gen2 edge module (Qualcomm Technologies, Inc., San Diego, CA, USA), it outperforms mainstream edge architectures. This proposed architecture achieves state-of-the-art diagnostic precision; balances accuracy, generalization and practicality; and is suitable for lightweight grassroots deployment in ocular disease screening.
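One common form of the "dynamic learnable weight fusion" mentioned above is to gate two branch features with a single trainable scalar passed through a sigmoid. The sketch below is a generic illustration of that pattern only; the paper's actual fusion mechanism, feature shapes, and parameterization may differ, and the vectors here are invented.

```python
import math

def learnable_weight_fusion(cnn_feat, vit_feat, alpha):
    """Blend two feature vectors with one learnable scalar alpha:
    w = sigmoid(alpha) gates the CNN branch, (1 - w) the ViT branch."""
    w = 1.0 / (1.0 + math.exp(-alpha))  # sigmoid keeps the gate in (0, 1)
    return [w * c + (1.0 - w) * v for c, v in zip(cnn_feat, vit_feat)]

# alpha = 0 gives an even 50/50 blend of the two branches.
print(learnable_weight_fusion([1.0, 0.0], [0.0, 1.0], alpha=0.0))
```

During training, gradients flowing through `w` let the network shift the balance between local (CNN) and global (ViT) features.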
(This article belongs to the Section Computing and Artificial Intelligence)

19 pages, 33281 KB  
Article
FLF-RCNN: A Fine-Tuned Lightweight Faster RCNN for Precise and Efficient Industrial Quality Inspection
by Ningli An, Zhichao Yang, Liangliang Wan, Jianan Li and Yiming Wang
Sensors 2026, 26(6), 1768; https://doi.org/10.3390/s26061768 - 11 Mar 2026
Abstract
Industrial Quality Inspection (IQI) is a pivotal part of intelligent manufacturing, critical to ensuring product quality. Deep learning-based methods have attracted growing attention for their excellent feature extraction ability, outperforming traditional detection approaches. However, existing methods still face issues of insufficient efficiency and poor transferability. This paper therefore proposes a Fine-tuned Lightweight Faster RCNN (FLF-RCNN) framework designed to address key challenges in IQI, including the trade-off between accuracy and computational efficiency and the insufficient adaptability of preset anchor box ratios. FLF-RCNN introduces a lightweight backbone network, LSNet, which enhances the receptive field through architectural optimization. Specifically, it uses a collaborative mechanism that combines large kernel convolutions for extracting contextual information and small kernel convolutions for capturing fine-grained details. This mechanism enables the model to efficiently and precisely represent defects. To enhance generalization in data-scarce industrial scenarios, the framework leverages transfer learning with pretrained weights. Furthermore, an Adaptive Anchor Box-Adjustment Module (AAB-AM) based on K-means clustering is introduced to improve detection across varied defect scales. Extensive experiments conducted on the Tianchi dataset show that FLF-RCNN achieves a mAP50 of 43.6%, outperforming detectors using MobileNet and EfficientNet backbones and surpassing the baseline Faster R-CNN by 7.9% in mAP50. Meanwhile, the proposed method reduces computational complexity by approximately 40%, reaching 98.65 GFLOPs, and decreases parameter count by around 30% to 28.2 M. These results demonstrate that FLF-RCNN offers a feasible and practical solution for IQI, achieving a superior accuracy-efficiency balance within the two-stage detection paradigm.
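K-means anchor adjustment of the kind described above typically clusters the training boxes' width-height pairs and uses the centroids as anchor shapes. Below is a minimal generic sketch with plain Euclidean distance and deterministic initialization (requires k >= 2); the paper's AAB-AM module is not public here, the toy box list is invented, and production anchor clustering often uses a 1 - IoU distance instead.

```python
def kmeans_anchors(boxes, k, iters=100):
    """Cluster (width, height) pairs into k anchor shapes with plain
    k-means. Initial centers are evenly spaced over the sorted boxes
    so the result is deterministic. Assumes k >= 2."""
    boxes = sorted(boxes)
    centers = [boxes[i * (len(boxes) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            nearest = min(range(k),
                          key=lambda j: (w - centers[j][0]) ** 2
                                        + (h - centers[j][1]) ** 2)
            clusters[nearest].append((w, h))
        new_centers = [(sum(w for w, _ in c) / len(c),
                        sum(h for _, h in c) / len(c)) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:  # assignments stable: converged
            break
        centers = new_centers
    return sorted(centers)

# Toy boxes: two small, two medium, two wide defects, as (w, h).
boxes = [(10, 12), (12, 10), (50, 60), (55, 58), (120, 40), (118, 42)]
print(kmeans_anchors(boxes, 3))  # [(11.0, 11.0), (52.5, 59.0), (119.0, 41.0)]
```

The three centroids then replace the detector's preset anchor ratios, matching anchors to the scale distribution actually present in the data.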

15 pages, 1290 KB  
Article
Efficient Deep Learning-Based M-PSK Detection for OFDM V2V Systems Using MobileNetV3
by Luis E. Tonix-Gleason, José A. Del-Puerto-Flores, Fernando Peña-Campos, Dunstano del Puerto-Flores, Juan-Carlos López-Pimentel, Carolina Del-Valle-Soto and Luis René Vela-Garcia
Algorithms 2026, 19(3), 210; https://doi.org/10.3390/a19030210 - 11 Mar 2026
Abstract
This paper investigates M-PSK symbol detection in Orthogonal Frequency Division Multiplexing (OFDM) systems for wideband Vehicle-to-Vehicle (V2V) communications using lightweight convolutional neural networks. In doubly dispersive channels, Inter-Carrier Interference (ICI) degrades subcarrier orthogonality, rendering conventional equalization ineffective. Current ICI mitigation techniques face a trade-off between Bit-Error Rate (BER) performance and computational complexity, limiting their applicability in dynamic vehicular scenarios. To address this issue, a low-complexity MobileNetV3-based receiver is proposed, incorporating a signal-model-driven preprocessing stage that compensates for Doppler-induced phase distortions responsible for ICI. Simulation results show that the proposed receiver improves BER performance compared to conventional equalizers and recent neural-based schemes in the low-SNR regime (below 15 dB) while maintaining computational complexity close to linear least-squares detection.

22 pages, 3598 KB  
Article
Fractional Tchebichef-ResNet-SE: A Hybrid Deep Learning Framework Integrating Fractional Tchebichef Moments with Attention Mechanisms for Enhanced IoT Intrusion Detection
by Islam S. Fathi, Ahmed R. El-Saeed, Mohammed Tawfik and Gaber Hassan
Fractal Fract. 2026, 10(3), 172; https://doi.org/10.3390/fractalfract10030172 - 5 Mar 2026
Abstract
The Internet of Things (IoT) faces critical security challenges stemming from resource-constrained devices and inadequate intrusion detection capabilities. Traditional machine learning approaches struggle with high-dimensional network traffic data due to the curse of dimensionality, severe class imbalance between benign and malicious traffic, and dependence on manual feature engineering that fails to capture complex non-linear attack patterns. Although deep neural networks offer automatic feature extraction, they suffer from two fundamental limitations: the degradation problem, where increasing network depth paradoxically raises training error rather than improving performance, and uniform channel weighting, which prevents the network from adaptively emphasizing attack-relevant features while suppressing irrelevant noise. This research proposes a novel hybrid framework integrating Fractional Tchebichef moment-based feature preprocessing with deep Residual Networks enhanced by Squeeze-and-Excitation (ResNet-SE) attention mechanisms. Fractional Tchebichef moments provide compact, noise-resistant representations by operating directly in the discrete domain, eliminating discretization errors inherent in continuous moment approaches. Network traffic features are transformed into 232 × 232 moment-based matrices capturing discriminative patterns across multiple scales. Comprehensive evaluation on Bot-IoT and Leopard Mobile IoT datasets demonstrates superior performance, achieving 99.78% accuracy and a 99.37% F1-score, substantially outperforming K-Nearest Neighbors (84.7%), Support Vector Machines (87.5%), and baseline CNNs (99.3%). Ablation studies confirm synergistic contributions, with residual connections contributing 0.18% and SE attention adding 0.14% improvements. Cross-dataset evaluation achieves 96.34% and 97.12% accuracy on UNSW-NB15 and IoT-Bot datasets without retraining, while the framework processes 127.9 samples per second across diverse attack taxonomies. 
(This article belongs to the Section Optimization, Big Data, and AI/ML)

12 pages, 1348 KB  
Proceeding Paper
LDDm-YOLO: A Distilled YOLOv8 Model for Efficient Real-Time UAV Detection on Edge Devices
by Maryam Lawan Salisu and Aminu Musa
Eng. Proc. 2026, 124(1), 68; https://doi.org/10.3390/engproc2026124068 - 4 Mar 2026
Abstract
Lightweight deep-learning models, including MobileNet and LDDm-CNN, have demonstrated significant potential for distinguishing drones from other aerial objects, making them well suited for deployment in resource-constrained environments. However, classification-based approaches face inherent limitations for real-time surveillance, as they rely on prior object cropping or manual region-of-interest extraction and lack the capability to localize drones directly within a complex scene. This limitation significantly restricts their applicability and effectiveness in dynamic and safety-critical environments such as airspace monitoring and critical infrastructure protection, where both recognition and spatial localization are crucial. To address this gap, we propose LDDm-YOLO, which uses YOLOv8n as a compact feature extractor and integrates a lightweight, anchor-free detection head with a shallow feature pyramid for multi-scale object localization. We employed knowledge distillation to transfer rich spatial and semantic features from a larger teacher detector (YOLOv8x), while incorporating Bayesian optimization for hyperparameter tuning. All experiments were conducted on the Google Colab platform with an NVIDIA T4 GPU. The proposed LDDm-YOLO achieves a competitive mean Average Precision (mAP) of 0.96, precision of 0.92, recall of 0.94, and 127.06 FPS, while retaining a model size of only 6.25 MB and low computational complexity (8.9 GFLOPs). These results indicate the potential of the proposed model for edge device deployment.
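Knowledge distillation of the kind used above (a large teacher detector guiding a compact student) is usually formulated, following Hinton et al., as a weighted sum of the ordinary hard-label loss and a temperature-softened KL divergence between teacher and student outputs. The sketch below shows that loss for a single classification output; it is a generic illustration with made-up logits, not the paper's detection-specific distillation.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable temperature-scaled softmax."""
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """alpha * hard cross-entropy on the true label, plus
    (1 - alpha) * T^2 * KL(teacher || student), both distributions
    softened by the temperature T."""
    hard = -math.log(softmax(student_logits)[label])
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft = sum(t * math.log(t / s) for t, s in zip(p_teacher, p_student))
    return alpha * hard + (1 - alpha) * temperature ** 2 * soft

# Made-up logits: the teacher is slightly more confident than the student.
loss = distillation_loss([2.0, 0.5, 0.1], [3.0, 0.2, 0.1], label=0)
print(round(loss, 4))
```

When student and teacher agree exactly, the soft term vanishes and only the hard-label loss remains, which is the sanity check for any KD implementation.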
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)

21 pages, 741 KB  
Article
Governing Collaborative Technological Innovation for Net-Zero Transition in Micro-Jurisdictions: Evidence from Macao’s New Qualitative Productivity Framework
by Bowen Chen, Xiaoyu Wei, Shenghua Lou, Hongfeng Zhang, Iek Hang Ngan and Kei Un Wong
Sustainability 2026, 18(3), 1509; https://doi.org/10.3390/su18031509 - 2 Feb 2026
Abstract
Against the backdrop of China’s dual-carbon goals and the global push toward net-zero emissions, Macao faces not only an innovation deficit but also the urgent need to reconfigure its economic structure toward green and low-carbon development. This study investigates collaborative innovation mechanisms within Macao’s technological ecosystem through the lens of new qualitative productivity, a paradigm emphasizing structural optimization and systemic innovation capacity. As a micro-jurisdiction within the Guangdong–Hong Kong–Macao Greater Bay Area (GBA), Macao faces challenges due to its tourism-dependent economy and spatial constraints. Employing a qualitative methodology grounded in collaborative governance theory, the research combines theoretical framework construction with empirical case studies of technology enterprises, notably Enterprise B, to analyze stakeholder interactions, resource integration, and institutional dynamics. This study examines how collaborative technological innovation governance in a micro-jurisdiction can underpin net-zero and green supply chain transitions by mobilizing cross-border resources and institutional synergies. Key findings reveal a polycentric governance model involving government, enterprises, academic institutions, and civil society organizations. This model leverages cross-border synergies, platformization, and adaptive recalibration to overcome structural limitations. Results highlight tripartite drivers—policy incentives, market forces, and corporate strategies—that enhance innovation throughput. Despite advancements in institutional coordination, challenges persist, including low enterprise absorption of government funding, talent attrition, and fragmented academic–industrial linkages. The study proposes strategic recalibrations, such as refining policy architectures, strengthening industry–academia–research symbiosis, and optimizing transnational collaboration through Macao’s Lusophone networks. 
The findings provide governance insights for micro-jurisdictions seeking to align new qualitative productivity with decarbonization, renewable energy integration, and participation in regional green supply chains.

22 pages, 122928 KB  
Article
GD-DAMNet: Real-Time UAV-Based Overhead Power-Line Presence Recognition Using a Lightweight Knowledge Distillation with Mamba-GhostNet v2 and Dual-Attention
by Shuyu Sun, Yingnan Xiao, Gaoping Li, Yuyan Wang, Ying Tan, Jundong Xie and Yifan Liu
Entropy 2026, 28(2), 166; https://doi.org/10.3390/e28020166 - 31 Jan 2026
Abstract
Power-line presence recognition technology for unmanned aerial vehicles (UAVs) is one of the key research directions in the field of UAV remote sensing. With the rapid development of UAV technology, the application of UAVs in various fields has become increasingly widespread. However, when flying in urban and rural areas, UAVs often face the danger of obstacles such as power lines, posing challenges to flight safety and stability. To address this issue, this study proposes a novel method for presence recognition in UAVs for power lines using an improved GhostNet v2 knowledge distillation dual-attention mechanism convolutional neural network. The construction of a real-time UAV power-line presence recognition system involves three aspects: dataset acquisition, a novel network model, and real-time presence recognition. First, by cleaning, enhancing, and segmenting the power-line data collected by UAVs, a UAV power-line presence recognition image dataset is obtained. Second, through comparative experiments with multi-attention modules, the dual-attention mechanism is selected to construct the CNN, and the UAV real-time power-line presence recognition training is conducted using the SGD optimiser and Hard-Swish activation function. Finally, knowledge distillation is employed to transfer the knowledge from the dual-attention mechanism-based CNN to the nonlinear function and Mamba-enhanced GhostNet v2 network, thereby reducing the model's parameter count and achieving real-time recognition performance suitable for mobile device deployment. Experiments demonstrate that the UAV-based real-time power-line presence recognition method proposed in this paper achieves real-time recognition accuracy rates of over 91.4% across all regions, providing a technical foundation for advancing the development and progress of UAV-based real-time power-line presence recognition.
(This article belongs to the Section Signal and Data Analysis)

25 pages, 1921 KB  
Article
Transfer Learning-Based Ethnicity Recognition Using Arbitrary Images Captured Through Diverse Imaging Sensors
by Hasti Soudbakhsh, Sonjoy Ranjon Das, Bilal Hassan and Muhammad Farooq Wasiq
Sensors 2026, 26(3), 886; https://doi.org/10.3390/s26030886 - 29 Jan 2026
Abstract
Ethnicity recognition has become increasingly important for a wide range of applications, highlighting the need for accurate and robust predictive models. Despite advances in machine learning, ethnicity classification remains a challenging research problem due to variations in facial features, class imbalance, and generalization issues. This study provides a concise synthesis of prior work to motivate the problem and then introduces a novel experimental framework for ethnicity recognition rather than a survey review. It proposes an improved approach that leverages transfer learning to enhance classification performance. Including various imaging sensors in the proposed methodology allows an examination of how those sensors affect facial recognition performance when images are captured under diverse real-world conditions using professional and consumer-grade devices. The UTKFace dataset was used to train and validate the method, and an additional balanced dataset, Test Celebrities Faces, was created, representing five ethnic groups (Black, Asian, White, Indian, and Other); the "Other" class was excluded from final evaluations to eliminate ambiguity and enhance stability. Both datasets were rigorously preprocessed for optimal feature extraction from the sensor-acquired images, and several pre-trained CNN (Convolutional Neural Network) models (VGG16, DenseNet169, VGG19, ResNet50, MobileNetV2, InceptionV3, and EfficientNetB4) were evaluated to identify an ideal hyperparameter configuration.
Experimental results indicate that the VGG19 model achieved 87% validation accuracy and a maximum test accuracy of 75% on the primary celebrity faces dataset, with per-class accuracies ranging from 51% to over 90% and an overall accuracy of 87% across the five ethnic groups. This work demonstrates that leveraging transfer learning on imaging-sensor-captured images enables robust ethnicity classification with high accuracy and improved training efficiency relative to full model retraining. Furthermore, systematic hyperparameter optimization enhances model generalization and mitigates overfitting. Comparative experiments with recent state-of-the-art methods (2023–2025) further confirm that our optimized VGG19 model achieves competitive performance, reinforcing the effectiveness of the proposed reproducible and fairness-aware evaluation framework.
(This article belongs to the Special Issue Deep Learning Based Face Recognition and Feature Extraction)

11 pages, 361 KB  
Brief Report
The Strategic Advantage of FQHCs in Implementing Mobile Health Units: Lessons Learned from a Pilot Initiative
by Lauren Bifulco, Anna Rogers, Cecilia Hackerson, Marwan S. Haddad, April Joy Damian and Kathleen Harding
Int. J. Environ. Res. Public Health 2026, 23(2), 158; https://doi.org/10.3390/ijerph23020158 - 27 Jan 2026
Abstract
High-need populations face substantive barriers to accessing primary care, leading to disproportionately poor health outcomes. This descriptive, observational study details the implementation of a Federally Qualified Health Center (FQHC) program designed to improve engagement in care and enabling services by leveraging mobile health units (MHUs) to provide comprehensive, low-barrier primary care services to residents who were previously unable or unwilling to engage with the traditional healthcare system. The program sought to overcome common access challenges such as lack of transportation, lack of insurance, and mistrust of healthcare institutions. We describe the operational framework of this program, examine the types of care delivered, and offer recommendations from the perspective of a large multi-site FQHC experienced in reengaging people back to the healthcare system but new to providing mobile health care. We describe our program's focus on prioritizing patient engagement and access and its consideration of operational and technical infrastructure. Based on our FQHC's experience, we provide recommendations on how to address patients' health and social needs. FQHCs have the potential to implement MHUs, drawing on their existing infrastructure and community relationships. Our MHU program is well-aligned with our FQHC's commitment and priority to deliver essential care and foster continuity within hard-to-reach communities, strengthening the local healthcare safety net and improving healthcare for high-need populations.
(This article belongs to the Special Issue Advances and Trends in Mobile Healthcare)

16 pages, 12168 KB  
Article
Real-Time Segmentation of Tactile Paving and Zebra Crossings for Visually Impaired Assistance Using Embedded Visual Sensors
by Yiqiang Jiang, Shicheng Yan and Jianyang Liu
Sensors 2026, 26(3), 770; https://doi.org/10.3390/s26030770 - 23 Jan 2026
Abstract
This study aims to address the safety and mobility challenges faced by visually impaired individuals. To this end, a lightweight, high-precision semantic segmentation network is proposed for scenes containing tactile paving and zebra crossings. The network is successfully deployed on an intelligent guide robot equipped with a high-definition camera and a Huawei Atlas 310 embedded computing platform. To enhance both real-time performance and segmentation accuracy on resource-constrained devices, an improved G-GhostNet backbone is designed for feature extraction. Specifically, it is combined with a depthwise separable convolution-based Coordinate Attention module and a redesigned Atrous Spatial Pyramid Pooling (ASPP) module to capture multi-scale contextual features. A dedicated decoder efficiently fuses multi-level features to refine segmentation of tactile paving and zebra crossings. Experimental results demonstrate that the proposed model achieves mPA of 97% and 93%, mIoU of 94% and 86% for tactile paving and zebra crossing segmentation, respectively, with an inference speed of 59.2 fps. These results significantly outperform several mainstream semantic segmentation networks, validating the effectiveness and practical value of the proposed method in embedded systems for visually impaired travel assistance.
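For reference, mPA and mIoU figures like those quoted above are computed from a per-class confusion matrix: mPA averages per-class pixel recall, and mIoU averages per-class intersection-over-union. A minimal sketch on toy flattened masks (the label arrays below are invented, not the paper's data):

```python
def confusion_matrix(y_true, y_pred, n_classes):
    """m[t][p] counts pixels with true class t predicted as class p."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def mean_iou(m):
    """Per class: TP / (TP + FP + FN), averaged over non-empty classes."""
    ious = []
    for c in range(len(m)):
        tp = m[c][c]
        fp = sum(row[c] for row in m) - tp
        fn = sum(m[c]) - tp
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

def mean_pixel_accuracy(m):
    """Per-class recall, averaged over non-empty classes."""
    accs = [m[c][c] / sum(m[c]) for c in range(len(m)) if sum(m[c])]
    return sum(accs) / len(accs)

# Toy flattened masks: 0 = background, 1 = tactile paving.
y_true = [0, 0, 0, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1, 1, 0]
m = confusion_matrix(y_true, y_pred, 2)
print(round(mean_iou(m), 3), round(mean_pixel_accuracy(m), 3))  # 0.583 0.733
```

Because mIoU also penalizes false positives, it is always at most mPA, which is why papers routinely report both.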
(This article belongs to the Section Sensing and Imaging)

33 pages, 5188 KB  
Article
Geometric Feature Enhancement for Robust Facial Landmark Detection in Makeup Paper Templates
by Cheng Chang, Yong-Yi Fanjiang and Chi-Huang Hung
Appl. Sci. 2026, 16(2), 977; https://doi.org/10.3390/app16020977 - 18 Jan 2026
Abstract
Traditional scoring of makeup face templates in beauty skill assessments heavily relies on manual judgment, leading to inconsistencies and subjective bias. Hand-drawn templates often exhibit proportion distortions, asymmetry, and occlusions that reduce the accuracy of conventional facial landmark detection algorithms. This study proposes a novel approach that integrates Geometric Feature Enhancement (GFE) with Dlib's 68-landmark detection to improve the robustness and precision of landmark localization. A comprehensive comparison among Haar Cascade, MTCNN-MobileNetV2, and Dlib was conducted using a curated dataset of 11,600 hand-drawn facial templates. The proposed GFE-enhanced Dlib achieved 60.5% accuracy—outperforming MTCNN (23.4%) and Haar (20.3%) by approximately 37 percentage points, with precision and F1-score improvements exceeding 20% and 25%, respectively. The results demonstrate that the proposed method significantly enhances detection accuracy and scoring consistency, providing a reliable framework for automated beauty skill evaluation, and laying a solid foundation for future applications such as digital archiving and style-guided synthesis.
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
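Landmark-localization quality of the kind compared in this abstract is often scored with a normalized mean error (NME) and a per-image hit threshold; the sketch below shows that generic evaluation scheme (the 0.08 threshold and the normalizing distance are assumptions, not the paper's exact criterion):

```python
import math

def normalized_mean_error(pred, gt, d_norm):
    """Mean Euclidean landmark error divided by a normalizing distance
    such as the inter-ocular distance."""
    errors = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(errors) / (len(errors) * d_norm)

def detection_accuracy(pred_sets, gt_sets, d_norms, threshold=0.08):
    """Fraction of images whose NME falls below the threshold; accuracy
    figures like 60.5% vs. 23.4% compare scores of this general kind."""
    hits = sum(
        normalized_mean_error(p, g, d) < threshold
        for p, g, d in zip(pred_sets, gt_sets, d_norms)
    )
    return hits / len(pred_sets)
```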

23 pages, 8263 KB  
Article
Uncertainty-Aware Deep Learning for Sugarcane Leaf Disease Detection Using Monte Carlo Dropout and MobileNetV3
by Pathmanaban Pugazhendi, Chetan M. Badgujar, Madasamy Raja Ganapathy and Manikandan Arumugam
AgriEngineering 2026, 8(1), 31; https://doi.org/10.3390/agriengineering8010031 - 16 Jan 2026
Viewed by 747
Abstract
Sugarcane diseases cause estimated global annual losses of over $5 billion. While deep learning shows promise for disease detection, current approaches lack transparency and confidence estimates, limiting their adoption by agricultural stakeholders. We developed an uncertainty-aware detection system integrating Monte Carlo (MC) dropout with MobileNetV3, trained on 2521 images across five categories: Healthy, Mosaic, Red Rot, Rust, and Yellow. The proposed framework achieved 97.23% accuracy with a lightweight architecture comprising 5.4 M parameters. It enabled 2.3 s inference while generating well-calibrated uncertainty estimates that were 4.0 times higher for misclassifications. High-confidence predictions (>70%) achieved 98.2% accuracy. Gradient-weighted Class Activation Mapping provided interpretable disease localization, and the system was deployed on Hugging Face Spaces for global accessibility. The model achieved comparatively higher recall for the Healthy and Red Rot classes. The inclusion of uncertainty quantification provides additional information that may support more informed decision-making in precision agriculture applications involving farmers and agronomists. Full article
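The MC-dropout idea described above (repeat a stochastic forward pass with dropout active at inference time, then read uncertainty off the spread of the outputs) can be sketched in a few lines; the toy scorer and its weights below are illustrative stand-ins, not the MobileNetV3 model from the article:

```python
import random
import statistics

def mc_dropout_predict(forward, x, n_samples=30, seed=0):
    """Run a dropout-enabled forward pass n_samples times; the mean is
    the prediction and the standard deviation is an uncertainty estimate."""
    rng = random.Random(seed)
    outputs = [forward(x, rng) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy stochastic scorer: a weighted sum whose inputs are dropped with
# probability p (weights are illustrative, not a trained model).
WEIGHTS = [0.5, 1.5, -0.7]

def toy_forward(x, rng, p=0.2):
    kept = [xi if rng.random() > p else 0.0 for xi in x]
    scale = 1.0 / (1.0 - p)  # inverted-dropout rescaling
    return scale * sum(w * xi for w, xi in zip(WEIGHTS, kept))

mean, std = mc_dropout_predict(toy_forward, [1.0, 2.0, 3.0])
```

A larger `std` flags inputs the model is unsure about; the article's report of uncertainties roughly 4.0 times higher on misclassified images is exactly the signal this spread captures.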

19 pages, 924 KB  
Article
Navigating Climate Neutrality Planning: How Mobility Management May Support Integrated University Strategy Development, the Case Study of Genoa
by Ilaria Delponte and Valentina Costa
Future Transp. 2026, 6(1), 19; https://doi.org/10.3390/futuretransp6010019 - 15 Jan 2026
Viewed by 368
Abstract
Higher education institutions face a critical methodological challenge in pursuing net-zero commitments: within Scope 3 emissions, which include indirect emissions from water consumption, waste disposal, business travel, and mobility, employee commuting represents 50–92% of campus carbon footprints, yet reliable quantification remains elusive due to fragmented data collection and governance silos. The present research investigates how purposeful integration of the Home-to-Work Commuting Plan (HtWCP)—mandatory under Italian Decree 179/2021—into the Climate Neutrality Plan (CNP) could constitute an innovative strategy to enhance emissions accounting rigor while strengthening institutional governance. Stemming from the University of Genoa case study, we show how leveraging mandatory HtWCP survey infrastructure to collect granular mobility behavioral data (transportation mode, commuting distance, and travel frequency) directly addresses the GHG Protocol-specified distance-based methodology for Scope 3 accounting. In turn, the CNP could support the HtWCP by framing mobility actions within a wider long-term perspective, as well as suggesting a compensation mechanism and paradigm for mobility actions that are currently not included. We therefore establish a replicable model that simultaneously advances three institutional dimensions, through the operationalization of the Avoid–Shift–Improve framework within an integrated workflow: (1) methodological rigor—replacing proxy methodologies with actual behavioral data to eliminate the notorious Scope 3 data gap; (2) governance coherence—aligning voluntary and regulatory instruments to reduce fragmentation and enhance cross-functional collaboration; and (3) adaptive management—embedding biennial feedback cycles that enable continuous validation and iterative refinement of emissions reduction strategies. 
This framework positions universities as institutional innovators capable of modeling integrated governance approaches with potential transferability to municipal, corporate, and public administration contexts. The findings contribute novel evidence to scholarly literature on institutional sustainability, policy integration, and climate governance, whilst establishing methodological standards relevant to international harmonization efforts in carbon accounting. Full article
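The GHG Protocol distance-based method referenced above reduces to multiplying each surveyed commuter's mode, distance, and frequency by a per-mode emission factor; a minimal sketch, with illustrative factors rather than the ones used in the Genoa plan:

```python
# Emission factors in kg CO2e per passenger-km (illustrative values only).
EMISSION_FACTORS = {"car": 0.17, "bus": 0.10, "train": 0.04, "bike": 0.0}

def commuting_emissions(survey_records):
    """Distance-based Scope 3 estimate: for each commuter, one-way
    distance (km) x 2 (round trip) x annual trips x mode factor."""
    total_kg = 0.0
    for mode, one_way_km, trips_per_year in survey_records:
        total_kg += EMISSION_FACTORS[mode] * one_way_km * 2 * trips_per_year
    return total_kg

# Hypothetical per-respondent records of the kind an HtWCP survey yields.
survey = [("car", 12.0, 200), ("train", 30.0, 180), ("bike", 3.0, 210)]
annual_kg = commuting_emissions(survey)  # 1248.0 kg CO2e
```

Replacing proxy averages with per-respondent records like `survey` is what the mandatory HtWCP survey infrastructure enables.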
