Search Results (1,367)

Search Parameters:
Keywords = granular model

18 pages, 9177 KB  
Article
Understanding Physiological Responses for Intelligent Posture and Autonomic Response Detection Using Wearable Technology
by Chaitanya Vardhini Anumula, Tanvi Banerjee and William Lee Romine
Algorithms 2025, 18(9), 570; https://doi.org/10.3390/a18090570 - 10 Sep 2025
Abstract
This study investigates how Iyengar yoga postures influence autonomic nervous system (ANS) activity by analyzing multimodal physiological signals collected via wearable sensors. The goal was to explore whether subtle postural variations elicit measurable autonomic responses and to identify which sensor features most effectively capture these changes. Participants performed a sequence of yoga poses while wearing synchronized sensors measuring electrodermal activity (EDA), heart rate variability, skin temperature, and motion. Interpretable machine learning models, including linear classifiers, were trained to distinguish physiological states and rank feature relevance. The results revealed that even minor postural adjustments led to significant shifts in ANS markers, with phasic EDA and RR interval features showing heightened sensitivity. Surprisingly, micro-movements captured via accelerometry and transient electrodermal reactivity, specifically EDA peak-to-RMS ratios, emerged as dominant contributors to classification performance. These findings suggest that small-scale kinematic and autonomic shifts, which are often overlooked, play a central role in the physiological effects of yoga. The study demonstrates that wearable sensor analytics can decode a more nuanced and granular physiological profile of mind–body practices than traditionally appreciated, offering a foundation for precision-tailored biofeedback systems and advancing objective approaches to yoga-based interventions. Full article
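The abstract above singles out EDA peak-to-RMS ratios as a dominant feature. A minimal numpy sketch of how such a feature could be computed follows; the study's exact windowing and phasic decomposition are not given, so this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def eda_peak_to_rms(signal: np.ndarray) -> float:
    """Ratio of the peak amplitude to the RMS of an EDA segment.

    Illustrative reconstruction of a 'peak-to-RMS' feature: the exact
    windowing and phasic/tonic decomposition used in the study are not
    specified in the abstract.
    """
    rms = np.sqrt(np.mean(signal ** 2))
    return float(np.max(np.abs(signal)) / rms)

# A burst-like phasic response yields a higher peak-to-RMS ratio than a
# flat tonic signal with the same energy.
burst = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
flat = np.full(8, np.sqrt(np.mean(burst ** 2)))  # same RMS, no transient peak
```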

22 pages, 3520 KB  
Article
A Deep Learning–Random Forest Hybrid Model for Predicting Historical Temperature Variations Driven by Air Pollution: Methodological Insights from Wuhan
by Yu Liu and Yuanfang Du
Atmosphere 2025, 16(9), 1056; https://doi.org/10.3390/atmos16091056 - 8 Sep 2025
Abstract
With the continuous acceleration of industrialization, air pollution has become increasingly severe and has, to some extent, contributed to the progression of global climate change. Against this backdrop, accurate temperature forecasting plays a vital role in various fields, including agricultural production, energy scheduling, environmental governance, and public health protection. To improve the accuracy and stability of temperature prediction, this study proposes a hybrid modeling approach that integrates convolutional neural networks (CNNs), Long Short-Term Memory (LSTM) networks, and random forests (RFs). This model fully leverages the strengths of CNNs in extracting local spatial features, the advantages of LSTM in modeling long-term dependencies in time series, and the capabilities of RF in nonlinear modeling and feature selection through ensemble learning. Based on daily temperature, meteorological, and air pollutant observation data from Wuhan during the period 2015–2023, this study conducted multi-scale modeling and seasonal performance evaluations. Pearson correlation analysis and random forest-based feature importance ranking were used to identify two key pollutants (PM2.5 and O3) and two critical meteorological variables (air pressure and visibility) that are strongly associated with temperature variation. A CNN-LSTM model was then constructed using the meteorological variables as input to generate preliminary predictions. These predictions were subsequently combined with the concentrations of the selected pollutants to form a new feature set, which was input into the RF model for secondary regression, thereby enhancing the overall model performance. The main findings are as follows: (1) The six major pollutants exhibit clear seasonal distribution patterns, with generally higher concentrations in winter and lower in summer, while O3 shows the opposite trend. 
Moreover, the influence of pollutants on temperature demonstrates significant seasonal heterogeneity. (2) The CNN-LSTM-RF hybrid model shows excellent performance in temperature prediction tasks. The predicted values align closely with observed data in the test set, with a low prediction error (RMSE = 0.88, MAE = 0.66) and a high coefficient of determination (R2 = 0.99), confirming the model’s accuracy and robustness. (3) In multi-scale forecasting, the model performs well on both daily (short-term) and monthly (mid- to long-term) scales. While daily-scale predictions exhibit higher precision, monthly-scale forecasts effectively capture long-term trends. A paired-sample t-test on annual mean temperature predictions across the two time scales revealed a statistically significant difference at the 95% confidence level (t = −3.5299, p = 0.0242), indicating that time granularity has a notable impact on prediction outcomes and should be carefully selected and optimized based on practical application needs. (4) One-way ANOVA and the non-parametric Kruskal–Wallis test were employed to assess the statistical significance of seasonal differences in daily absolute prediction errors. Results showed significant variation across seasons (ANOVA: F = 2.94, p = 0.032; Kruskal–Wallis: H = 8.82, p = 0.031; both p < 0.05), suggesting that seasonal changes considerably affect the model’s predictive performance. Specifically, the model exhibited the highest RMSE and MAE in spring, indicating poorer fit, whereas performance was best in autumn, with the highest R2 value, suggesting a stronger fitting capability. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
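The two-stage pipeline described above — a first-stage model's preliminary predictions combined with pollutant concentrations and passed to a random forest for secondary regression — can be sketched with scikit-learn. A ridge regressor stands in for the CNN-LSTM stage, and the synthetic data is purely illustrative, not the Wuhan observations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: meteorological inputs, pollutant levels, temperature.
met = rng.normal(size=(300, 4))          # e.g. pressure, visibility, ...
pollutants = rng.normal(size=(300, 2))   # e.g. PM2.5, O3
temp = (met @ np.array([1.0, -0.5, 0.3, 0.2])
        + 0.4 * pollutants[:, 0]
        + rng.normal(scale=0.1, size=300))

# Stage 1: sequence model (Ridge used as a lightweight stand-in for the
# CNN-LSTM described in the abstract) produces preliminary predictions.
stage1 = Ridge().fit(met, temp)
prelim = stage1.predict(met)

# Stage 2: preliminary predictions + selected pollutant concentrations form
# a new feature set for the random forest's secondary regression.
stacked = np.column_stack([prelim, pollutants])
stage2 = RandomForestRegressor(n_estimators=50, random_state=0).fit(stacked, temp)
final = stage2.predict(stacked)
```

The secondary regression can recover variance (here, the pollutant term) that the first stage never saw, which is the rationale for the hybrid design.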

23 pages, 1476 KB  
Article
Dynamically Optimized Object Detection Algorithms for Aviation Safety
by Yi Qu, Cheng Wang, Yilei Xiao, Haijuan Ju and Jing Wu
Electronics 2025, 14(17), 3536; https://doi.org/10.3390/electronics14173536 - 4 Sep 2025
Abstract
Infrared imaging technology demonstrates significant advantages in aviation safety monitoring due to its exceptional all-weather operational capability and anti-interference characteristics, particularly in scenarios requiring real-time detection of aerial objects such as airport airspace management. However, traditional infrared target detection algorithms face critical challenges in complex sky backgrounds, including low signal-to-noise ratio (SNR), small target dimensions, and strong background clutter, leading to insufficient detection accuracy and reliability. To address these issues, this paper proposes the AFK-YOLO model based on the YOLO11 framework: it integrates an ADown downsampling module, which utilizes a dual-branch strategy combining average pooling and max pooling to effectively minimize feature information loss during spatial resolution reduction; introduces the KernelWarehouse dynamic convolution approach, which adopts kernel partitioning and a contrastive attention-based cross-layer shared kernel repository to address the challenge of linear parameter growth in conventional dynamic convolution methods; and establishes a feature decoupling pyramid network (FDPN) that replaces static feature pyramids with a dynamic multi-scale fusion architecture, utilizing parallel multi-granularity convolutions and an EMA attention mechanism to achieve adaptive feature enhancement. Experiments demonstrate that the AFK-YOLO model achieves 78.6% mAP on a self-constructed aerial infrared dataset—a 2.4 percentage point improvement over the baseline YOLO11—while meeting real-time requirements for aviation safety monitoring (416.7 FPS), reducing parameters by 6.9%, and compressing weight size by 21.8%. The results demonstrate the effectiveness of dynamic optimization methods in improving the accuracy and robustness of infrared target detection under complex aerial environments, thereby providing reliable technical support for the prevention of mid-air collisions. 
Full article
(This article belongs to the Special Issue Computer Vision and AI Algorithms for Diverse Scenarios)
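The ADown module's dual-branch idea — average pooling for smooth context on half the channels, max pooling for salient responses on the other half — can be illustrated in numpy. This sketch omits the per-branch convolutions of the actual module and is not the YOLO11 implementation:

```python
import numpy as np

def dual_branch_downsample(x: np.ndarray) -> np.ndarray:
    """2x spatial downsampling mixing average and max pooling.

    x: feature map of shape (C, H, W) with even C, H, W. The real ADown
    block also applies convolutions in each branch; omitted for clarity.
    """
    c = x.shape[0] // 2
    a, b = x[:c], x[c:]

    def blocks(t):
        # Reshape into non-overlapping 2x2 blocks for pooling.
        C, H, W = t.shape
        return t.reshape(C, H // 2, 2, W // 2, 2)

    avg = blocks(a).mean(axis=(2, 4))   # smooth-context branch
    mx = blocks(b).max(axis=(2, 4))     # salient-response branch
    return np.concatenate([avg, mx], axis=0)

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
y = dual_branch_downsample(x)           # shape (2, 2, 2)
```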

38 pages, 848 KB  
Article
Predicting Cybersecurity Incidents via Self-Reported Behavioral and Psychological Indicators: A Stratified Logistic Regression Approach
by László Bognár
J. Cybersecur. Priv. 2025, 5(3), 67; https://doi.org/10.3390/jcp5030067 - 4 Sep 2025
Abstract
This study presents a novel and interpretable, deployment-ready framework for predicting cybersecurity incidents through item-level behavioral, cognitive, and dispositional indicators. Based on survey data from 453 professionals across countries and sectors, we developed 72 logistic regression models across twelve self-reported incident outcomes—from account lockouts to full device compromise—within six analytically stratified layers (Education, IT, Hungary, UK, USA, and full sample). Drawing on five theoretically grounded domains—cybersecurity behavior, digital literacy, personality traits, risk rationalization, and work–life boundary blurring—our models preserve the full granularity of individual responses rather than relying on aggregated scores, offering rare transparency and interpretability for real-world applications. This approach reveals how stratified models, despite smaller sample sizes, often outperform general ones by capturing behavioral and contextual specificity. Moderately prevalent outcomes (e.g., suspicious logins, multiple mild incidents) yielded the most robust predictions, while rare-event models, though occasionally high in “Area Under the Receiver Operating Characteristic Curve” (AUC), suffered from overfitting under cross-validation. Beyond model construction, we introduce threshold calibration and fairness-aware integration of demographic variables, enabling ethically grounded deployment in diverse organizational contexts. By unifying theoretical depth, item-level precision, multilayer stratification, and operational guidance, this study establishes a scalable blueprint for human-centric cybersecurity. It bridges the gap between behavioral science and risk analytics, offering the tools and insights needed to detect, predict, and mitigate user-level threats in increasingly blurred digital environments. Full article
(This article belongs to the Special Issue Cybersecurity Risk Prediction, Assessment and Management)
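The stratified modelling strategy — fitting a separate logistic regression per layer on item-level predictors rather than aggregated scores — can be sketched with scikit-learn. The synthetic items and stratum labels below are illustrative, not the study's survey instrument:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic item-level responses (e.g. Likert items) and an incident
# outcome, with a stratum column (labels only echo the paper's layers).
X = rng.integers(1, 6, size=(400, 5)).astype(float)
sector = rng.choice(["Education", "IT"], size=400)
logit = 0.8 * (X[:, 0] - 3) - 0.6 * (X[:, 1] - 3)
y = (rng.random(400) < 1 / (1 + np.exp(-logit))).astype(int)

# One model per stratum, plus a pooled model for comparison — the paper's
# finding is that stratified fits often capture context the pooled fit misses.
models = {s: LogisticRegression(max_iter=1000).fit(X[sector == s], y[sector == s])
          for s in ["Education", "IT"]}
pooled = LogisticRegression(max_iter=1000).fit(X, y)

proba = models["IT"].predict_proba(X[sector == "IT"])[:, 1]
```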

25 pages, 4707 KB  
Article
Field-Scale Rice Area and Yield Mapping in Sri Lanka with Optical Remote Sensing and Limited Training Data
by Mutlu Özdoğan, Sherrie Wang, Devaki Ghose, Eduardo Fraga, Ana Fernandes and Gonzalo Varela
Remote Sens. 2025, 17(17), 3065; https://doi.org/10.3390/rs17173065 - 3 Sep 2025
Abstract
Rice is a staple crop for over half the world’s population, and accurate, timely information on its planted area and production is crucial for food security and agricultural policy, particularly in developing nations like Sri Lanka. However, reliable rice monitoring in regions like Sri Lanka faces significant challenges due to frequent cloud cover and the fragmented nature of smallholder farms. This research introduces a novel, cost-effective method for mapping rice-planted area and yield at field scales in Sri Lanka using optical satellite data. The rice-planted fields were identified and mapped using a phenologically tuned image classification algorithm that highlights rice presence by observing water occurrence during transplanting and vegetation activity during subsequent crop growth. To estimate yields, a random forest regression model was trained at the district level by incorporating a satellite-derived chlorophyll index and environmental variables and subsequently applied at the field level. The approach has enabled the creation of two decades (2000–2022) of reliable, field-scale rice area and yield estimates, achieving map accuracies between 70% and over 90% and yield estimates with less than 20% error. These highly granular results, which are not available through traditional surveys, show a strong correlation with government statistics. They also demonstrate the advantages of a rule-based, phenology-driven classification over purely statistical machine learning models for long-term consistency in dynamic agricultural environments. This work highlights the significant potential of remote sensing to provide accurate and detailed insights into rice cultivation, supporting policy decisions and enhancing food security in Sri Lanka and other cloud-prone regions. Full article
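The phenologically tuned rule described above — standing water during the transplanting window followed by vegetation green-up — can be sketched for a single pixel's time series. The thresholds, index choices, and window bounds are assumptions; the paper tunes them to local crop calendars:

```python
import numpy as np

def classify_rice(water_index: np.ndarray, veg_index: np.ndarray,
                  transplant: slice, growth: slice,
                  water_thr: float = 0.3, veg_thr: float = 0.5) -> bool:
    """Rule-based, phenology-driven rice flag for one pixel's time series.

    Thresholds and window bounds here are illustrative assumptions.
    """
    flooded = np.max(water_index[transplant]) > water_thr  # standing water at transplanting
    growing = np.max(veg_index[growth]) > veg_thr          # canopy green-up afterwards
    return bool(flooded and growing)

# 10-step toy series: flooded early, strong green-up later -> rice.
water = np.array([0.6, 0.5, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
veg = np.array([0.1, 0.1, 0.2, 0.4, 0.7, 0.8, 0.8, 0.6, 0.4, 0.2])
is_rice = classify_rice(water, veg, transplant=slice(0, 3), growth=slice(3, 9))
```

Because the decision is an explicit rule over crop phenology rather than a fitted boundary, it behaves consistently across years — the long-term-consistency advantage the abstract notes over purely statistical classifiers.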

25 pages, 3134 KB  
Article
Threat Intelligence Extraction Framework (TIEF) for TTP Extraction
by Anooja Joy, Madhav Chandane, Yash Nagare and Faruk Kazi
J. Cybersecur. Priv. 2025, 5(3), 63; https://doi.org/10.3390/jcp5030063 - 3 Sep 2025
Abstract
The increasing complexity and scale of cyber threats demand advanced, automated methodologies for extracting actionable cyber threat intelligence (CTI). The automated extraction of Tactics, Techniques, and Procedures (TTPs) from unstructured threat reports remains a challenging task, constrained by the scarcity of labeled data, severe class imbalance, semantic variability, and the complexity of multi-class, multi-label learning for fine-grained classification. To address these challenges, this work proposes the Threat Intelligence Extraction Framework (TIEF) designed to autonomously extract Indicators of Compromise (IOCs) from heterogeneous textual threat reports and represent them by the STIX 2.1 standard for standardized sharing. TIEF employs the DistilBERT Base-Uncased model as its backbone, achieving an F1 score of 0.933 for multi-label TTP classification, while operating with 40% fewer parameters than traditional BERT-base models and preserving 97% of their predictive performance. Distinguishing itself from existing methodologies such as TTPDrill, TTPHunter, and TCENet, TIEF incorporates a multi-label classification scheme capable of covering 560 MITRE ATT&CK classes comprising techniques and sub-techniques, thus facilitating a more granular and semantically precise characterization of adversarial behaviors. BERTopic modeling integration enabled the clustering of semantically similar textual segments and captured the variations in threat report narratives. By operationalizing sub-technique-level discrimination, TIEF contributes to context-aware automated threat detection. Full article
(This article belongs to the Collection Machine Learning and Data Analytics for Cyber Security)

25 pages, 931 KB  
Article
A Trust Score-Based Access Control Model for Zero Trust Architecture: Design, Sensitivity Analysis, and Real-World Performance Evaluation
by Eunsu Jeong and Daeheon Yang
Appl. Sci. 2025, 15(17), 9551; https://doi.org/10.3390/app15179551 - 30 Aug 2025
Abstract
As digital infrastructures become increasingly dynamic and complex, traditional static access control mechanisms are no longer sufficient to counter advanced and persistent cyber threats. In response, Zero Trust Architecture (ZTA) emphasizes continuous verification and context-aware access decisions. To realize these principles in practice, this study introduces a Trust Score (TS)-based access control model as a systematic alternative to legacy, rule-driven approaches that lack adaptability in real-time environments. The proposed TS model quantifies the trustworthiness of users or devices based on four core factors—User Behavior (B), Network Environment (N), Device Status (D), and Threat History (T)—each derived from measurable operational attributes. These factors were carefully structured to reflect real-world Zero Trust environments, and a total of 20 detailed sub-metrics were developed to support their evaluation. This design enables accurate and granular trust assessment using live operational data, allowing for fine-tuned access control decisions aligned with Zero Trust principles. A comprehensive sensitivity analysis was conducted to evaluate the relative impact of each factor under different weight configurations and operational conditions. The results revealed that B and N are most influential in real-time evaluation scenarios, while B and T play a decisive role in triggering adaptive policy responses. This analysis provides a practical basis for designing and optimizing context-aware access control strategies. Empirical evaluations using the UNSW-NB15 dataset confirmed the TS model’s computational efficiency and scalability. Compared to legacy access control approaches, the TS model achieved significantly lower latency and higher throughput with minimal memory usage, validating its suitability for deployment in real-time, resource-constrained Zero Trust environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
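The core scoring idea — a Trust Score computed as a weighted combination of the four factors (B, N, D, T) and mapped to an access decision — can be sketched as follows. The weights and thresholds are illustrative; the paper derives them from its sensitivity analysis:

```python
def trust_score(b: float, n: float, d: float, t: float,
                weights=(0.35, 0.25, 0.2, 0.2)) -> float:
    """Weighted Trust Score over User Behavior (B), Network Environment (N),
    Device Status (D), and Threat History (T), each normalized to [0, 1].

    The weights are illustrative assumptions, not the paper's calibrated values.
    """
    wb, wn, wd, wt = weights
    return wb * b + wn * n + wd * d + wt * t

def access_decision(score: float, grant: float = 0.7, stepup: float = 0.4) -> str:
    """Map a Trust Score to an access outcome (thresholds are assumptions)."""
    if score >= grant:
        return "allow"
    if score >= stepup:
        return "step-up-auth"
    return "deny"

score = trust_score(b=0.9, n=0.8, d=0.7, t=0.95)  # 0.845 -> "allow"
```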

23 pages, 4773 KB  
Article
Predicting Constitutive Behaviour of Idealized Granular Soils Using Recurrent Neural Networks
by Xintong Li and Jianfeng Wang
Appl. Sci. 2025, 15(17), 9495; https://doi.org/10.3390/app15179495 - 29 Aug 2025
Abstract
The constitutive modelling of granular soils has been a long-standing research subject in geotechnical engineering, and machine learning (ML) has recently emerged as a promising tool for achieving this goal. This paper proposes two recurrent neural networks, namely, the Gated Recurrent Unit Neural Network (GRU-NN) and the Long Short-Term Memory Neural Network (LSTM-NN), which utilize input parameters such as the initial void ratio, initial fabric anisotropy, uniformity coefficient, mean particle size, and confining pressure to establish the high-dimensional relationships of granular soils from micro to macro levels subjected to triaxial shearing. The research methodology consists of several steps. Firstly, 200 numerical triaxial tests on idealized granular soils comprising polydisperse spherical particles are performed using the discrete element method (DEM) simulation to generate datasets and to train and test the proposed neural networks. Secondly, LSTM-NN and GRU-NN are constructed and trained, and their prediction performance is evaluated by the mean absolute percentage error (MAPE) and R-square against the DEM-based datasets. The extremely low error values obtained by both LSTM-NN and GRU-NN indicate their outstanding capability in predicting the constitutive behaviour of idealized granular soils. Finally, the trained ML-based models are applied to predict the constitutive behaviour of a miniature glass bead sample subjected to triaxial shearing with in situ micro-CT, as well as to two extrapolated test sets with different initial parameters. The results show that both methods perform well in capturing the mechanical responses of the idealized granular soils. Full article
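The recurrence at the heart of GRU-NN can be shown with a single GRU cell written in numpy (random untrained weights, biases omitted — a sketch of the mechanism, not the trained model):

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h~.

    Bias terms are omitted for brevity.
    """
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)              # update gate
    r = sig(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde      # interpolate old and candidate state

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8   # e.g. void ratio, fabric anisotropy, ... -> hidden state
W = [rng.normal(scale=0.1, size=(n_hid, n_in)) for _ in range(3)]
U = [rng.normal(scale=0.1, size=(n_hid, n_hid)) for _ in range(3)]

h = np.zeros(n_hid)
for step in range(10):                    # unroll over a short loading sequence
    h = gru_cell(rng.normal(size=n_in), h, W[0], U[0], W[1], U[1], W[2], U[2])
```

The hidden state carries the loading history forward, which is why recurrent networks suit path-dependent stress-strain behaviour.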

26 pages, 5545 KB  
Article
An Intelligent Optimization Design Method for Furniture Form Considering Multi-Dimensional User Affective Requirements
by Lei Fu, Xinyan Yang, Ling Zhu and Jiufang Lv
Symmetry 2025, 17(9), 1406; https://doi.org/10.3390/sym17091406 - 29 Aug 2025
Abstract
A pervasive cognitive asymmetry exists between designers and users, and contemporary furniture form design often struggles to accommodate and balance multi-dimensional user affective requirements. To address these challenges, this study proposes an intelligent optimization design method for furniture form that enhances the universality of user research and the balance of design decision-making. First, representative user requirements (URs) are extracted from online user review texts collected through web crawling. These URs are then classified into three-dimensional quality attributes using the refined Kano’s model, thereby identifying the key URs. Second, a decomposition table of furniture design characteristics (DCs) is constructed. Third, the multi-objective red-billed blue magpie optimizer (MORBMO) is employed to automatically generate a Pareto solution set that satisfies the multi-dimensional key URs, from which the final optimal solution is determined. The proposed method improves the objectivity and granularity of user research, assists furniture enterprises in prioritizing product development, and enhances user satisfaction across multiple affective dimensions. Furthermore, it provides enterprises with flexible choices among diverse alternatives, thereby mitigating the asymmetry inherent in furniture form design. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Computer-Aided Industrial Design)
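The Pareto solution set returned by MORBMO rests on the standard dominance relation, which can be sketched independently of the optimizer (minimization assumed; the objective vectors below are toy data):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the non-dominated solutions from a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy objective vectors, e.g. (affective mismatch, cost): lower is better.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (4.0, 4.0)]
front = pareto_front(candidates)
```

The front contains the trade-off alternatives among which the method then selects a final solution, which is what gives enterprises the "flexible choices" the abstract mentions.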

42 pages, 1578 KB  
Article
FirmVulLinker: Leveraging Multi-Dimensional Firmware Profiling for Identifying Homologous Vulnerabilities in Internet of Things Devices
by Yixuan Cheng, Fengzhi Xu, Lei Xu, Yang Ge, Jingyu Yang, Wenqing Fan, Wei Huang and Wen Liu
Electronics 2025, 14(17), 3438; https://doi.org/10.3390/electronics14173438 - 28 Aug 2025
Abstract
Identifying homologous vulnerabilities across diverse IoT firmware images is critical for large-scale vulnerability auditing and risk assessment. However, existing approaches often rely on coarse-grained components or single-dimensional metrics, lacking the semantic granularity needed to capture cross-firmware vulnerability relationships. To address this gap, we propose FirmVulLinker, a semantic profiling framework that holistically models firmware images across five dimensions: unpacking signature sequences, filesystem semantics, interface exposure, boundary binary symbols, and sensitive parameter call chains. These multi-dimensional profiles enable interpretable similarity analysis without requiring prior vulnerability labels. We construct an evaluation dataset comprising 54 Known Defective Firmware (KDF) images with 74 verified vulnerabilities and assess FirmVulLinker across multiple correlation tasks. Compared to state-of-the-art techniques, FirmVulLinker achieves higher precision with substantially lower false-positive and false-negative rates. Notably, it identifies and reproduces 53 previously undisclosed N-day vulnerabilities in firmware images not listed as affected at the time of public disclosure, effectively extending the known impact scope. Our results demonstrate that FirmVulLinker enables scalable, high-fidelity homologous vulnerability analysis, offering a new perspective on understanding cross-firmware vulnerability patterns in the IoT ecosystem. Full article
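One way to picture the multi-dimensional profile comparison is a weighted combination of per-dimension set similarities. The Jaccard measure and the dimension names below are illustrative stand-ins, not FirmVulLinker's actual metrics:

```python
def jaccard(a: set, b: set) -> float:
    """Set similarity in [0, 1]; defined as 1.0 when both sets are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def profile_similarity(p1: dict, p2: dict, weights: dict) -> float:
    """Weighted similarity over named profile dimensions."""
    total = sum(weights.values())
    return sum(w * jaccard(p1[d], p2[d]) for d, w in weights.items()) / total

# Toy two-dimension profiles (the framework models five dimensions).
fw_a = {"exports": {"login", "upload"}, "params": {"cmd", "ip"}}
fw_b = {"exports": {"login", "reboot"}, "params": {"cmd", "ip"}}
sim = profile_similarity(fw_a, fw_b, {"exports": 1.0, "params": 1.0})
```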

12 pages, 1667 KB  
Proceeding Paper
Multivariate Forecasting Evaluation: Nixtla-TimeGPT
by S M Ahasanul Karim, Bahram Zarrin and Niels Buus Lassen
Comput. Sci. Math. Forum 2025, 11(1), 29; https://doi.org/10.3390/cmsf2025011029 - 26 Aug 2025
Abstract
Generative models are being used across domains. While primarily built for processing text and images, their reach has been extended to data-driven forecasting. Whereas there are many statistical, machine learning, and deep learning models for predictive forecasting, generative models are distinctive because they do not need to be trained beforehand, saving time and computational power. Moreover, multivariate forecasting with existing models is difficult when the future horizons of the regressors are unknown, because the regressors add more uncertainty to the forecasting process. This study therefore experiments with TimeGPT (zero-shot) by Nixtla to determine whether the generative model can outperform models such as ARIMA, Prophet, NeuralProphet, Linear Regression, XGBoost, Random Forest, LSTM, and RNN. The research created synthetic datasets and synthetic signals to assess individual model and regressor performance for 12 models, and these findings were then used to assess the performance of TimeGPT against the best-fitting models in different real-world scenarios. The results showed that TimeGPT outperforms the other models in multivariate forecasting at weekly granularity, automatically selecting important regressors, whereas its performance at daily and monthly granularities is still weak. Full article

25 pages, 5957 KB  
Article
Benchmarking IoT Simulation Frameworks for Edge–Fog–Cloud Architectures: A Comparative and Experimental Study
by Fatima Bendaouch, Hayat Zaydi, Safae Merzouk and Saliha Assoul
Future Internet 2025, 17(9), 382; https://doi.org/10.3390/fi17090382 - 26 Aug 2025
Abstract
Current IoT systems are structured around Edge, Fog, and Cloud layers to manage data and resource constraints more effectively. Although several studies have examined IoT simulators from a functional angle, few have combined technical comparisons with experimental validation under realistic conditions. This lack of integration limits the practical value of prior results and complicates tool selection for distributed architectures. This work introduces a selection and evaluation methodology for simulators that explicitly represent the Edge–Fog–Cloud continuum. Thirteen open-source tools are analyzed based on functional, technical, and operational features. Among them, iFogSim2 and FogNetSim++ are selected for a detailed experimental comparison on their support of mobility, resource allocation, and energy modeling across all layers. A shared hybrid IoT scenario is simulated using eight key metrics: execution time, application loop delay, CPU processing time per tuple, energy consumption, cloud execution cost, network usage, scalability, and robustness. The analysis reveals distinct modeling strategies: FogNetSim++ reduces loop latency by 48% and maintains stable performance at scale but shows high data loss under overload. In contrast, iFogSim2 consumes up to 80% less energy and preserves message continuity in stressful conditions, albeit with longer execution times. These outcomes reflect the trade-offs between modeling granularity, performance stability, and system resilience. Full article

21 pages, 6790 KB  
Article
MGFormer: Super-Resolution Reconstruction of Retinal OCT Images Based on a Multi-Granularity Transformer
by Jingmin Luan, Zhe Jiao, Yutian Li, Yanru Si, Jian Liu, Yao Yu, Dongni Yang, Jia Sun, Zehao Wei and Zhenhe Ma
Photonics 2025, 12(9), 850; https://doi.org/10.3390/photonics12090850 - 25 Aug 2025
Abstract
Optical coherence tomography (OCT) acquisitions often reduce lateral sampling density to shorten scan time and suppress motion artifacts, but this strategy degrades the signal-to-noise ratio and obscures fine retinal microstructures. To recover these details without hardware modifications, we propose MGFormer, a lightweight Transformer for OCT super-resolution (SR) that integrates a multi-granularity attention mechanism with tensor distillation. A feature-enhancing convolution first sharpens edges; stacked multi-granularity attention blocks then fuse coarse-to-fine context, while a row-wise top-k operator retains the most informative tokens and preserves their positional order. We trained and evaluated MGFormer on B-scans from the Duke SD-OCT dataset at 2×, 4×, and 8× scaling factors. Relative to seven recent CNN- and Transformer-based SR models, MGFormer achieves the highest quantitative fidelity; at 4× it reaches 34.39 dB PSNR and 0.8399 SSIM, surpassing SwinIR by +0.52 dB and +0.026 SSIM, and reduces LPIPS by 21.4%. Compared with the same backbone without tensor distillation, FLOPs drop from 289G to 233G (−19.4%), and per-B-scan latency at 4× falls from 166.43 ms to 98.17 ms (−41.01%); the model size remains compact (105.68 MB). A blinded reader study shows higher scores for boundary sharpness (4.2 ± 0.3), pathology discernibility (4.1 ± 0.3), and diagnostic confidence (4.3 ± 0.2), exceeding SwinIR by 0.3–0.5 points. These results suggest that MGFormer can provide fast, high-fidelity OCT SR suitable for routine clinical workflows. Full article
(This article belongs to the Section Biophotonics and Biomedical Optics)
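The row-wise top-k operator described in the MGFormer abstract keeps the most informative tokens per row while preserving their positional order. A minimal NumPy sketch of that selection step (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def rowwise_topk_preserve_order(tokens, scores, k):
    """Keep the k highest-scoring tokens per row, in original positional order.

    tokens: (rows, n, d) token embeddings; scores: (rows, n) informativeness.
    """
    rows, n, d = tokens.shape
    k = min(k, n)
    # argpartition finds the indices of the k largest scores per row without a full sort
    top_idx = np.argpartition(scores, -k, axis=1)[:, -k:]
    # re-sort the selected indices so the kept tokens retain their original order
    top_idx = np.sort(top_idx, axis=1)
    kept = np.take_along_axis(tokens, top_idx[:, :, None], axis=1)
    return kept, top_idx

# toy example: 1 row, 5 tokens of dimension 2
tokens = np.arange(10, dtype=float).reshape(1, 5, 2)
scores = np.array([[0.1, 0.9, 0.2, 0.8, 0.3]])
kept, idx = rowwise_topk_preserve_order(tokens, scores, k=2)
# tokens at positions 1 and 3 are kept, still in positional order
```

Sorting the selected indices before gathering is what preserves token order, which matters when downstream attention layers rely on positional encodings.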
15 pages, 3154 KB  
Article
Transformer-Based HER2 Scoring in Breast Cancer: Comparative Performance of a Foundation and a Lightweight Model
by Yeh-Han Wang, Min-Hsiang Chang, Hsin-Hsiu Tsai, Chun-Jui Chien and Jian-Chiao Wang
Diagnostics 2025, 15(17), 2131; https://doi.org/10.3390/diagnostics15172131 - 23 Aug 2025
Abstract
Background/Objectives: Human epidermal growth factor receptor 2 (HER2) scoring is critical for modern breast cancer therapies, especially with emerging indications of antibody–drug conjugates for HER2-low tumors. However, inter-observer agreement remains limited in borderline cases. Automatic artificial intelligence-based scoring has the potential to improve diagnostic consistency and scalability. This study aimed to develop two transformer-based models for HER2 scoring of breast cancer whole-slide images (WSIs) and compare their performance. Methods: We adapted a large-scale foundation model (Virchow) and a lightweight model (TinyViT). Both were trained using patch-level annotations and integrated into a WSI scoring pipeline. Performance was evaluated on a clinical test set (n = 66), including clinical decision tasks and inference efficiency. Results: Both models achieved substantial agreement with pathologist reports (linear weighted kappa: 0.860 for Virchow, 0.825 for TinyViT). Virchow showed slightly higher WSI-level accuracy than TinyViT, whereas TinyViT reduced inference times by 60%. In three binary clinical tasks, both models demonstrated a diagnostic performance comparable to pathologists, particularly in identifying HER2-low tumors for antibody–drug conjugate (ADC) therapy. A continuous scoring framework demonstrated a strong correlation between the two models (Pearson’s r = 0.995) and aligned with human assessments. Conclusions: Both transformer-based artificial intelligence models achieved human-level accuracy for automated HER2 scoring with interpretable outputs. While the foundation model offers marginally higher accuracy, the lightweight model provides practical advantages for clinical deployment. In addition, continuous scoring may provide a more granular HER2 quantification, especially in borderline cases. This could support a new interpretive paradigm for HER2 assessment aligned with the evolving indications of ADC. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
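The agreement metric reported above, linearly weighted Cohen's kappa, penalizes disagreements between ordinal HER2 scores (0, 1+, 2+, 3+) in proportion to their distance. A small self-contained sketch of the standard formula (not the authors' code):

```python
import numpy as np

def linear_weighted_kappa(a, b, n_classes=4):
    """Linearly weighted Cohen's kappa for ordinal labels 0..n_classes-1."""
    a, b = np.asarray(a), np.asarray(b)
    # observed joint distribution of the two raters' labels
    obs = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # expected joint distribution if the raters were independent
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # linear disagreement weights: |i - j| / (n_classes - 1)
    idx = np.arange(n_classes)
    w = np.abs(idx[:, None] - idx[None, :]) / (n_classes - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()

print(linear_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3]))  # perfect agreement -> 1.0
```

A value near 0.86, as reported for Virchow, indicates agreement well beyond chance while weighting a 2+/3+ confusion less severely than a 0/3+ confusion.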
7 pages, 752 KB  
Proceeding Paper
Usage of OLAP Cubes as a Data Model for DSS
by Nikolai Scerbakov, Alexander Schukin and Eugenia Rezedinova
Eng. Proc. 2025, 104(1), 4; https://doi.org/10.3390/engproc2025104004 - 22 Aug 2025
Abstract
A decision support system (DSS) is a software application designed to determine suitable actions for specific organizational situations. Its main component is a data repository analyzed to produce decisions. This paper describes the data organization (Data Model) as a multi-dimensional OLAP cube with amendments for decision-making support. We present the DSS functionality as slicing hyper-cubes into decision sub-cubes. The system’s adjustment and evolution involve changing the granularity of these sub-cubes. We discuss the merging and splitting of hyper-cubes, arguing that this functionality is adequate for creating complex, real-time DSSs for various incidents, such as cybersecurity incidents. Full article
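The slicing and granularity-change operations described in this abstract can be illustrated on a sparse cube represented as a dictionary mapping dimension tuples to a measure. This is a generic sketch of OLAP slice and rollup, not the paper's implementation; the incident dimensions and values are hypothetical:

```python
from collections import defaultdict

# A sparse hyper-cube: dimension tuple -> measure.
# Hypothetical dimensions: (incident_type, severity, day)
cube = {
    ("phishing", "high", "2025-01-01"): 4,
    ("phishing", "low",  "2025-01-01"): 7,
    ("malware",  "high", "2025-01-02"): 2,
    ("malware",  "low",  "2025-01-02"): 5,
}

def slice_cube(cube, axis, value):
    """Slice: fix one dimension to a value, yielding a lower-dimensional sub-cube."""
    return {k[:axis] + k[axis + 1:]: v for k, v in cube.items() if k[axis] == value}

def rollup(cube, axis):
    """Coarsen granularity: drop one dimension and aggregate its measures."""
    out = defaultdict(int)
    for k, v in cube.items():
        out[k[:axis] + k[axis + 1:]] += v
    return dict(out)

high = slice_cube(cube, axis=1, value="high")  # sub-cube of high-severity incidents
by_day = rollup(cube, axis=1)                  # merge severities: totals per (type, day)
```

Merging sub-cubes then amounts to summing measures over shared keys, and splitting to partitioning keys along a dimension, which matches the cube evolution the abstract describes.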