Search Results (239)

Search Parameters:
Keywords = white-box models

27 pages, 31622 KB  
Article
The Influence of Surface Roughness on GIS-Based Solar Radiation Modelling
by Renata Ďuračiová, Tomáš Ič and Tomasz Oberski
ISPRS Int. J. Geo-Inf. 2026, 15(4), 155; https://doi.org/10.3390/ijgi15040155 - 3 Apr 2026
Abstract
While parameters such as slope and aspect are routinely considered in solar radiation modelling, the role of terrain or surface roughness remains underexplored, with no universally accepted method for its calculation. This study compares several approaches to quantifying terrain or surface roughness in several geographical information system (GIS) environments (ArcGIS, QGIS, WhiteboxTools, and SAGA GIS) and introduces local fractal dimension, computed using a custom Python script, as an additional metric. The aim is to evaluate the influence of surface roughness on potential solar radiation modelling and to examine its relationship with other terrain parameters. The analysis is based on case studies from both a rugged alpine environment in the Tatra Mountains (Tichá and Kôprová dolina (valleys), Kriváň peak; 944–2467 m a.s.l.) and an urban environment (the city of Poprad, near the High Tatras, Slovakia). The results demonstrate that surface roughness can significantly affect potential solar radiation modelling in areas with high surface variability. The findings are applicable not only to solar radiation studies, but also to other fields of spatial modelling, where incorporating surface roughness can improve the accuracy and robustness of spatial analyses and predictions. Full article
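As a minimal illustration of the kind of metric being compared, surface roughness is often computed as the standard deviation of elevation within a moving window over the DEM. The sketch below is plain NumPy and is not taken from any of the compared GIS tools; window size and the std-dev formulation are illustrative assumptions.

```python
import numpy as np

def local_roughness(dem: np.ndarray, window: int = 3) -> np.ndarray:
    """Standard deviation of elevation in a moving window -- one common
    surface-roughness proxy (illustrative; the study compares several
    GIS-specific formulations, not this exact one)."""
    pad = window // 2
    padded = np.pad(dem, pad, mode="edge")
    out = np.empty_like(dem, dtype=float)
    rows, cols = dem.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + window, j:j + window].std()
    return out

# A flat surface has zero roughness; a rugged one does not.
flat = np.full((5, 5), 100.0)
rugged = np.arange(25, dtype=float).reshape(5, 5) ** 2
```

Other formulations (terrain ruggedness index, vector dispersion, the paper's local fractal dimension) differ mainly in what statistic is taken over the window.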

20 pages, 1454 KB  
Article
Momentum-Based Adversarial Attacks and Multi-Level Denoising Defenses in Deep Learning-Based Wind Power Forecasting
by Yangming Min, Congmei Jiang, Kang Yang, Xiankui Wen and Kexin Chen
Sensors 2026, 26(7), 2073; https://doi.org/10.3390/s26072073 - 26 Mar 2026
Abstract
Deep learning (DL) techniques have significantly advanced wind power forecasting by enhancing accuracy. However, these DL models are vulnerable to adversarial attacks, which can lead to severely inaccurate forecasts. Existing studies in wind power forecasting have rarely addressed the stealthiness and effectiveness of adversarial attacks simultaneously, nor have they investigated defense strategies against multiple perturbation strengths or in black-box scenarios. To this end, we propose an attack algorithm for wind power forecasting, i.e., the momentum iterative fast gradient sign method (MI-FGSM). This algorithm generates adversarial samples by incorporating momentum into the iterative process and adding perturbations to the input samples along the gradient direction. To defend against such attacks under varying perturbation strengths, a defense model called multi-level iterative denoising autoencoder (MLI-DAE) is proposed. MLI-DAE is trained using adversarial samples with multiple perturbation levels to effectively restore attacked inputs to their clean forms. Experimental results under both white-box and black-box scenarios demonstrate that MI-FGSM induces significantly larger forecast errors with smaller perturbation magnitudes compared to FGSM. Furthermore, our proposed MLI-DAE effectively defends against multi-level perturbations without compromising the original forecast accuracy. Full article
(This article belongs to the Section Internet of Things)
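MI-FGSM itself is well documented: each iteration accumulates an L1-normalized gradient into a momentum buffer, then steps along the sign of that buffer while staying inside an ε-ball. The NumPy sketch below applies it to a hypothetical one-layer "forecaster" with an analytic gradient; the toy model, parameters, and loss are illustrative assumptions, not the paper's wind-power models.

```python
import numpy as np

def mi_fgsm(x, grad_fn, eps=0.1, steps=10, mu=1.0):
    """Momentum Iterative FGSM sketch: accumulate an L1-normalized
    gradient with decay mu, step along its sign, and project back into
    the eps-ball around the clean input. grad_fn returns dLoss/dx."""
    alpha = eps / steps
    x_adv = x.astype(float).copy()
    g = np.zeros_like(x, dtype=float)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        x_adv = x_adv + alpha * np.sign(g)                # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)          # stay in eps-ball
    return x_adv

# Hypothetical "forecaster": loss = 0.5 * (w.x - y)^2, gradient w*(w.x - y)
w, y = np.array([1.0, -2.0, 0.5]), 3.0
grad_fn = lambda x: w * (w @ x - y)
x0 = np.array([1.0, 1.0, 1.0])
x_adv = mi_fgsm(x0, grad_fn, eps=0.2)
```

The momentum term is what stabilizes the update direction across iterations and, per the attack literature, improves transferability over plain iterative FGSM.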

22 pages, 31045 KB  
Article
Robust and Stealthy White-Box Watermarking for Intellectual Property Protection of Remote Sensing Object Detection Models
by Lingjun Zou, Xin Xu, Weitong Chen, Qingqing Hong and Di Wu
Remote Sens. 2026, 18(7), 985; https://doi.org/10.3390/rs18070985 - 25 Mar 2026
Abstract
Remote sensing object detection (RSOD) models play an increasingly important role in modern remote sensing systems. However, during model delivery, sharing, and deployment, RSOD models face increasing risks of unauthorized redistribution, illegal replication, and intellectual property infringement. To mitigate these threats, this paper proposes a white-box watermarking framework for RSOD models that enables reliable copyright verification while preserving the performance of the primary detection task. Specifically, a gradient-based sensitivity analysis of the detection loss is first performed to adaptively identify model parameters that minimally affect detection performance, which are then selected as watermark carriers. Subsequently, a parameter-ranking-based watermark encoding scheme is developed, where watermark bits are embedded by enforcing relative ordering constraints between parameter pairs. To further improve robustness under practical deployment conditions, an attack-simulation-driven training strategy is introduced, in which common perturbations and watermark removal attacks are simulated during the embedding process. In addition, a stealthiness enhancement strategy based on statistical distribution constraints is designed to maintain consistency between the distribution of watermarked parameters and those of the original model, thereby reducing the risk of watermark exposure and localization. Extensive experiments across multiple RSOD datasets and detection architectures demonstrate that the proposed method achieves a high copyright verification success rate with negligible impact on detection accuracy and exhibits strong robustness and stealthiness against a variety of watermark removal attacks. Full article
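The parameter-ranking encoding can be sketched independently of the full pipeline: each watermark bit fixes the relative order of one parameter pair, and extraction just compares the pair. The minimal Python version below is an assumption-laden illustration; it omits the sensitivity-guided carrier selection, attack-simulation training, and distribution constraints described above.

```python
import numpy as np

def embed_bits(params, pairs, bits, margin=1e-3):
    """Ranking-based watermark embedding sketch: for pair (i, j), bit 1
    enforces params[i] > params[j] and bit 0 the reverse, nudging values
    only when the ordering is wrong (margin keeps the order detectable)."""
    p = params.astype(float).copy()
    for (i, j), b in zip(pairs, bits):
        want_gt = bool(b)
        if (p[i] > p[j]) != want_gt:
            p[i], p[j] = p[j], p[i]          # swap to satisfy the ordering
        if abs(p[i] - p[j]) < margin:        # enforce a detection margin
            mid = (p[i] + p[j]) / 2
            p[i] = mid + (margin if want_gt else -margin)
            p[j] = mid - (margin if want_gt else -margin)
    return p

def extract_bits(params, pairs):
    """Read the watermark back by comparing each carrier pair."""
    return [int(params[i] > params[j]) for i, j in pairs]

rng = np.random.default_rng(0)
weights = rng.normal(size=10)                # stand-in for model parameters
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
watermark = [1, 0, 1, 1]
marked = embed_bits(weights, pairs, watermark)
```

Because only relative orderings carry the bits, the encoding survives any perturbation that preserves pairwise order, which is the intuition behind its robustness.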

18 pages, 2375 KB  
Article
Beyond the Black Box: An Interpretable Saliency Framework for Abstract Art via Theory-Driven Heuristics
by Evaldas Vaičekauskas and Vytautas Abromavičius
Appl. Sci. 2026, 16(7), 3145; https://doi.org/10.3390/app16073145 - 24 Mar 2026
Abstract
Visual saliency modeling has achieved high predictive performance in natural image domains, yet its generalization to abstract art remains limited by the lack of explicit semantic structure and the scarcity of eye-tracking data. In such semantically ambiguous contexts, understanding the underlying drivers of attention is as critical as predictive accuracy. This paper presents an interpretable, 'white-box' saliency framework tailored to abstract art, which constructs predictions through a weighted combination of 35 modular heuristics grounded in perceptual psychology and art theory, including contrast, grouping, isolation, and symmetry. Heuristic weights are optimized via a genetic algorithm and refined by a context-aware modulation mechanism that adapts to image-level visual features. Evaluation against eye-tracking data from 40 abstract paintings demonstrates that the model with the expanded activation variant produces stable, meaningful predictions while achieving a competitive KL-divergence score (1.11 ± 0.55), which is comparable to the SalGAN baseline (1.11 ± 0.53). Analysis of the optimized weights reveals strong contributions from contrast, texture, and grouping mechanisms, while nearly half of the heuristics, including most horizontal symmetry heuristics, are systematically pruned by the model. Moreover, context-aware modulation reveals that these weights are not static but shift dynamically based on image-level features such as edge density and intensity variation. By prioritizing transparency over raw predictive performance, this study demonstrates that explainable saliency models can function as robust investigative tools for decoding the principles of human visual perception in data-scarce domains. Full article
(This article belongs to the Special Issue Explainable Machine Learning and Computer Vision)
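At its core, such a white-box model is a weighted sum of heuristic maps, with near-zero weights acting as pruned heuristics. A hypothetical minimal form (no genetic algorithm or context-aware modulation, both of which the paper adds on top):

```python
import numpy as np

def fuse_saliency(heuristic_maps, weights):
    """White-box saliency as a weighted sum of per-heuristic maps,
    normalized to a distribution. A zero weight means that heuristic
    is effectively pruned from the prediction."""
    stack = np.stack(heuristic_maps)            # (n_heuristics, rows, cols)
    w = np.asarray(weights, dtype=float)
    fused = np.tensordot(w, stack, axes=1)      # weighted sum over heuristics
    fused -= fused.min()
    total = fused.sum()
    return fused / total if total > 0 else fused
```

The transparency claim follows directly from this form: each heuristic's contribution to the final map is just its weight times its map.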

28 pages, 7144 KB  
Article
Optimization of an MPC Controller Based on a Hybrid Cooling Load Prediction Model and Experimental Validation in HVAC Systems
by Shen Zhang, Xuelian Lei, Xiaofang Shan, Ting Li and Wenyu Wu
Buildings 2026, 16(6), 1269; https://doi.org/10.3390/buildings16061269 - 23 Mar 2026
Abstract
The high energy intensity of public buildings, especially those with HVAC systems, calls for advanced control strategies such as Model Predictive Control (MPC) to balance energy efficiency and thermal comfort. However, the performance of MPC relies critically on the accuracy and robustness of building cooling and heating load calculations, which remain challenging, particularly for buildings with complex dynamic characteristics. This study proposes a simplified modeling-based MPC approach and investigates the influence of three different load calculation methods on controller performance: a physics-driven white-box model, a data-driven black-box model, and a novel Closed-Loop Load Grey Model (CLLGM). Under identical outdoor conditions during summer cooling operation, the three controllers exhibit distinct performance disparities: although the proposed CLLGM-based controller only reduces the load prediction MAPE by 0.63% compared with the black-box model, it improves the temperature control stability index (TDI) by 80.43% and increases the comprehensive score from the MPC multi-objective optimization function by 16.55%. Its key advantage is that it can use on-site temperature measurements as feedback to correct the cooling load, making it better suited for simulation and computation in MPC. Full article

27 pages, 4244 KB  
Article
Low-Voltage Blood Component Separation for Implantable Kidneys Using a Sawtooth Electrode and Negative Dielectrophoresis
by Hasan Mhd Nazha, Mhd Ayham Darwich, Al-Hasan Ali and Basem Ammar
Appl. Sci. 2026, 16(6), 2785; https://doi.org/10.3390/app16062785 - 13 Mar 2026
Abstract
Implantable artificial kidneys represent a promising alternative for patients with end-stage renal disease (ESRD), aiming to overcome the limitations of conventional dialysis through the integration of microfluidic and electrokinetic technologies. In this study, we present a sawtooth electrode microfluidic chamber that achieves blood cell separation via negative dielectrophoresis at a record-low operating voltage of 1.4 V, representing a fivefold reduction compared with rectangular electrode designs and supporting potential integration into implantable artificial kidney systems. A microfluidic chip incorporating an asymmetric sawtooth electrode geometry was developed to enhance local electric field gradients while reducing power consumption. Device performance was investigated using COMSOL Multiphysics simulations. Response Surface Methodology (RSM) based on a Box–Behnken design was employed to optimize the number of teeth per unit length (N), sawtooth height (H), and applied voltage (V), while excitation frequency was fixed at 1 MHz and flow velocity was maintained constant at 0.1 µL·min⁻¹. Statistical analysis was conducted using analysis of variance (ANOVA) in Minitab (Version 27; Minitab, LLC, State College, PA, USA, 2024). The optimization model showed strong predictive capability (R² = 95.8%) and identified applied voltage (59.45% contribution) and sawtooth height (33%) as the dominant factors affecting separation efficiency, with a significant H × V interaction (p = 0.023). Comprehensive voltage-response mapping over the range of 0.8–4.0 V revealed four operational regimes, including a previously unreported high-voltage failure zone above 2.8 V, where electrothermal flow and electroporation degrade performance. Under physiological conductivity conditions, the optimized design maintained a separation efficiency of 78.3% at 1.4 V with a tip temperature rise of only 1.2 °C, while full recovery of performance was achieved at 2.2 V.
Cell-specific separation efficiencies reached 97.3% for white blood cells, 95.8% for red blood cells, and 84.7% for platelets, reducing the downstream cellular load by 92.6%. These findings demonstrate that the proposed low-voltage, high-efficiency separation platform has strong potential as a cellular pre-filtration module in implantable artificial kidney systems and other lab-on-chip biomedical devices. Full article
(This article belongs to the Special Issue Advances in Materials for Biosensing and Biomedical Applications)
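For context, the time-averaged dielectrophoretic force on a spherical particle is conventionally written as (textbook relations, not the paper's own equations):

$$\mathbf{F}_{\mathrm{DEP}} = 2\pi \varepsilon_m r^3\,\mathrm{Re}\!\left[K(\omega)\right]\nabla|\mathbf{E}|^2,\qquad K(\omega)=\frac{\varepsilon_p^{*}-\varepsilon_m^{*}}{\varepsilon_p^{*}+2\varepsilon_m^{*}},\qquad \varepsilon^{*}=\varepsilon-\frac{j\sigma}{\omega}$$

Negative DEP corresponds to Re[K(ω)] < 0, which pushes particles toward field minima; sharpening ∇|E|² with sawtooth tips is what allows the same force to be reached at a lower applied voltage.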

40 pages, 5583 KB  
Article
Traceable Time-Domain Photovoltaic Module Modeling with Plane-of-Array Irradiance and Solar Geometry Coupling: White-Box Simulink Implementation and Experimental Validation
by Ciprian Popa, Florențiu Deliu, Adrian Popa, Narcis Octavian Volintiru, Andrei Darius Deliu, Iancu Ciocioi and Petrică Popov
Energies 2026, 19(6), 1437; https://doi.org/10.3390/en19061437 - 12 Mar 2026
Abstract
Accurate time-domain photovoltaic (PV) models are needed to evaluate performance under outdoor variability beyond STC datasheet conditions. This paper presents a traceable modeling workflow based on the standard single-diode formulation, implemented in MATLAB/Simulink (R2023a) as a modular white-box architecture that explicitly resolves photocurrent generation and loss mechanisms (diode recombination, shunt leakage, and series resistance effects) with temperature-consistent propagation through V_T(T) and saturation-current terms. The method couples optical boundary conditions to the electrical model by embedding plane-of-array (POA) excitation via the incidence angle θ(t) and roof albedo directly into the photocurrent source term, preserving the causal chain from mounting geometry to electrical response. Calibration is separated from prediction by initializing key parameters using the standard Simulink PV block and then freezing them for time-domain evaluation. The workflow is validated on a 395 W rooftop prototype using 1 min resolved POA irradiance (ISO 9060:2018 Class A radiometric chain) and module temperature (IEC 60751 Class A Pt100), synchronized with electrical measurements. Over a multi-week campaign, the model exhibits high fidelity, with a worst-case relative current error of ~1.1% and a consistently low bias and dispersion, quantified by ME, MAE, RMSE, σ_e, and thresholded MAPE. Full article
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
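The standard single-diode formulation is implicit in the terminal current and is typically solved iteratively. A self-contained Newton solver is sketched below; the module-level parameter values are made up for illustration and are not the paper's calibrated ones.

```python
import math

def single_diode_current(v, i_ph, i_0, r_s, r_sh, n, v_t, iters=60):
    """Solve the implicit single-diode equation
        i = i_ph - i_0*(exp((v + i*r_s)/(n*v_t)) - 1) - (v + i*r_s)/r_sh
    for the terminal current i at voltage v using Newton's method.
    Generic textbook model; parameters below are assumptions."""
    i = i_ph  # the photocurrent is a good starting guess
    for _ in range(iters):
        vd = v + i * r_s                        # internal junction voltage
        e = math.exp(vd / (n * v_t))
        f = i_ph - i_0 * (e - 1) - vd / r_sh - i
        df = -i_0 * e * r_s / (n * v_t) - r_s / r_sh - 1
        i -= f / df                             # Newton step
    return i

# Illustrative module-level parameters (assumed, not from the paper);
# v_t here is the module-level thermal voltage (cell V_T times cell count).
params = dict(i_ph=11.0, i_0=1e-9, r_s=0.3, r_sh=300.0, n=1.3, v_t=1.85)
i_sc = single_diode_current(0.0, **params)      # short-circuit current ~ i_ph
```

Temperature dependence enters through v_t and i_0, which is the "temperature-consistent propagation" the abstract refers to.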

19 pages, 573 KB  
Article
Bitcoin Market Efficiency Analysis Pre- and Post-COVID-19 Pandemic: An Interrupted Time Series and ARIMAX Approach
by Tendai Makoni, Providence Mushori and Delson Chikobvu
Economies 2026, 14(3), 90; https://doi.org/10.3390/economies14030090 - 11 Mar 2026
Abstract
The COVID-19 pandemic constitutes one of the most significant exogenous shocks to global financial markets in recent history, raising questions about the robustness of market efficiency under extreme uncertainty. This study examines whether the pandemic affected the weak-form efficiency of the Bitcoin market or merely heightened volatility without introducing return predictability. Using daily Bitcoin log returns from January 2013 to February 2026, the analysis first evaluates weak-form market efficiency through the Variance Ratio (VR) test. The VR statistics remain close to unity across multiple holding horizons, and the null hypothesis of a random walk cannot be rejected, indicating that daily Bitcoin returns are consistent with weak-form efficiency. Building on this baseline, an Interrupted Time Series (ITS) framework is employed to assess whether the onset of the COVID-19 pandemic in March 2020 led to structural changes in Bitcoin return dynamics. The ITS results reveal no statistically significant changes in level or slope following the outbreak. To further account for autoregressive and moving-average dynamics while explicitly modelling the intervention, an ARIMAX (0, 0, 7) model with COVID-19 intervention variables is estimated. Both the pandemic dummy and its interaction term are statistically insignificant, indicating no material change in the return-generating process after controlling for serial dependence. The moving-average structure indicates that shocks dissipate over approximately one trading week, consistent with weekly trading cycles and liquidity patterns in cryptocurrency markets rather than persistent return predictability. Diagnostic checks, including the Ljung–Box and Shapiro–Wilk tests, confirm the absence of residual autocorrelation and support the model’s white-noise properties. Although volatility increased during the pandemic period, daily Bitcoin returns continued to align with weak-form market efficiency. 
The evidence, therefore, suggests that COVID-19 served as a stressor without generating persistent inefficiencies. These findings reinforce the distinction between volatility and predictability, demonstrating that heightened uncertainty does not necessarily undermine informational efficiency. Full article
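The Lo–MacKinlay variance ratio compares the variance of q-period returns with q times the one-period variance; under a random walk, VR(q) ≈ 1. The sketch below is a minimal NumPy version without the heteroskedasticity-robust test statistic a full study would also report.

```python
import numpy as np

def variance_ratio(returns, q):
    """VR(q) = Var(q-period overlapping returns) / (q * Var(1-period)).
    Values near 1 are consistent with a random walk (weak-form
    efficiency); sketch only, no finite-sample bias corrections."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean()
    var_1 = np.mean((r - mu) ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period sums
    var_q = np.mean((rq - q * mu) ** 2)
    return var_q / (q * var_1)

rng = np.random.default_rng(42)
iid_returns = rng.normal(0, 0.02, 5000)             # random-walk-like returns
vr2 = variance_ratio(iid_returns, 2)                # should be near 1
```

VR(q) > 1 indicates positive return autocorrelation (momentum), VR(q) < 1 mean reversion; the study's finding is that Bitcoin's statistic stays near unity before and after the shock.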

14 pages, 4429 KB  
Article
Reading Urban Heritage Through Roofscapes: A Machine Learning Approach for Tirilye
by İdris Can Irız, Server Funda Kerestecioğlu and Ilker Karadag
Land 2026, 15(3), 437; https://doi.org/10.3390/land15030437 - 10 Mar 2026
Abstract
Historic towns often lack thorough records, complicating the study of long-term material changes in the built environment. This study develops RoofChronoNet, a machine learning workflow that extracts roof covering classes from grayscale imagery and quantifies roofscape change over time. Applied to Tirilye (Bursa, Turkey), historical aerial photographs from 1970 and 1984 are colourised using a pix2pix generative adversarial network trained on 2022 imagery. A YOLOv11m-seg model then detects roof surfaces and classifies them into three roof covering categories: red, white, and dark grey, producing diachronic roofscape maps for 1970–2022. Bounding box detection reached mask mAP@0.50 of 0.81 (2022), ≈0.71 (1984), and 0.76 (1970, single class), while class-averaged mask mAP@0.50 was lower due to pixel-level delineation complexity. Results indicate the persistence of red-tiled roof regimes within the historic core alongside a growing presence of white and dark-grey roof coverings in peripheral areas, consistent with renovation-driven material diffusion after the 1980s. Methodologically, the study contributes a reproducible framework that operationalises chromatic differentiation as a measurable variable for mapping roof covering regimes in planning history research using monochrome historical aerial imagery. RoofChronoNet supports heritage-oriented and planning history interpretations of material regime shifts in data-scarce contexts; however, colourised outputs are synthetic and probabilistic, and spatial inferences should be corroborated with archival or field-based evidence where feasible. Full article

31 pages, 2326 KB  
Article
A Logic-Guided and Explainable Approach to LLM-Based Unit Test Generation
by Cong Zeng, Meng Li, Fei Liu, Xiaohua Yang, Jie Liu and Shiyu Yan
Appl. Sci. 2026, 16(5), 2542; https://doi.org/10.3390/app16052542 - 6 Mar 2026
Abstract
Large Language Models (LLMs) have demonstrated considerable potential in automated unit test generation; however, most existing approaches rely on a black-box paradigm that directly maps code under test to test code, often resulting in low compilation success rates, limited branch coverage, high assertion failure rates, and poor interpretability. Inspired by the human process of developing test cases, this paper proposes Logic-CoT, a white-box generation paradigm that follows a code under test–logical reasoning–test code workflow. The proposed approach consists of three stages: in the logical inference stage, logical node state vectors and execution paths are constructed from the control flow graph of the code under test, and input values and oracles satisfying state constraints are derived; in the test case construction stage, a template-based method is used to initialize test code conforming to the Arrange–Act–Assert pattern, with test intentions explicitly documented as comments; in the repair stage, syntactic errors and assertion failures are handled in a layered manner, where the former are corrected without altering test logic and the latter trigger logic reflection based on discrepancies between expected and actual outcomes, leading to state updates and test case reconstruction. This design forms a closed-loop process of reasoning, generation, and repair. Experiments on the QuixBugs, Apache Commons, HumanEval, and SV-COMP benchmarks show that Logic-CoT consistently outperforms state-of-the-art approaches such as ChatUniTest in terms of compilation success rate, runtime pass rate, assertion pass rate, branch coverage, average repair iterations for faulty code, and interpretability. Ablation studies further demonstrate that each component of Logic-CoT contributes effectively to improving the overall quality and effectiveness of generated test cases. 
These results indicate that Logic-CoT improves the reliability and interpretability of LLM-generated unit tests in practical software testing scenarios. Full article
(This article belongs to the Special Issue Advancements in Computer Systems and Software Testing)
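The Arrange–Act–Assert pattern with a documented test intention, as used in the construction stage, looks like this in miniature (the function under test and values are hypothetical, not drawn from the benchmarks):

```python
def clamp(x, lo, hi):
    """Code under test: restrict x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

def test_clamp_above_upper_bound():
    # Intention: cover the branch where x exceeds the upper bound.
    # Arrange
    x, lo, hi = 15, 0, 10
    # Act
    result = clamp(x, lo, hi)
    # Assert
    assert result == hi

test_clamp_above_upper_bound()
```

In Logic-CoT, the input values and the expected oracle for each such test are derived from path constraints on the control flow graph rather than guessed by the LLM directly.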

20 pages, 3325 KB  
Review
Intelligent Monitoring and Early Warning Diagnosis Technology for Ethylene Cracking Furnace Tubes: A Review of Current Status and Future Prospects
by Jia-Kuan Ren, Xiu-Qing Xu, Zhi-Hong Li, Peng Wang, Guang-Li Zhang, Li-Juan Zhu, Zhen-Quan Bai and Fang-Wei Luo
Processes 2026, 14(5), 811; https://doi.org/10.3390/pr14050811 - 2 Mar 2026
Abstract
As the “flagship” unit of the petrochemical industry, the operational status of ethylene cracking furnaces directly impacts the stability and efficiency of the entire production chain. During long-term operation under extreme temperatures and complex reaction environments, cracking furnace tubes face core bottlenecks primarily related to thermal and coking effects, such as coke deposition, tube metal overheating, and associated creep damage, which restrict the long-term, safe, and efficient operation of the unit. This paper systematically reviews the key technologies for condition monitoring of cracking furnace tubes, providing an in-depth analysis of various monitoring methods—from traditional infrared thermometry and acoustic emission to emerging optical fiber sensing—covering their working principles, application status, and inherent limitations. Furthermore, it elaborates on the evolution from mechanism-based “white-box” models to data-driven “black-box” models, and further to “gray-box” intelligent diagnostic models that integrate expert knowledge. Industrial application cases of integrated monitoring and diagnostic systems are also introduced. Finally, the paper critically addresses the current severe challenges in data fusion, model generalization, real-time performance, and cost-effectiveness, while outlining future development trends toward digital twins, cross-modal fusion, edge intelligence, and self-evolving systems. The aim is to provide valuable references for technological innovation and engineering applications in this field. Full article

30 pages, 716 KB  
Article
Spectral Robustness Mixer: Cross-Scale Neck for Robust No-Reference Image Quality Assessment
by Bader Rasheed, Anastasia Antsiferova and Dmitriy Vatolin
Technologies 2026, 14(3), 145; https://doi.org/10.3390/technologies14030145 - 28 Feb 2026
Abstract
No-reference image quality assessment (NR-IQA) models achieve high correlation with human mean opinion scores (MOS) on clean benchmarks, yet recent work shows they can be highly vulnerable to small adversarial perturbations that severely degrade ranking consistency, including in black-box settings. We introduce the Spectral Robustness Mixer (SRM), a lightweight neck inserted between an NR-IQA backbone and regression head, designed to reduce adversarial sensitivity without changing the dataset, label format, or target metric. SRM couples (i) deep-to-shallow cross-scale fusion via a Nyström low-rank attention surrogate, (ii) ridge-conditioned landmark kernels with ridge regularization, solved via numerically stable small-matrix factorization (SVD/LU) to improve conditioning, and (iii) variance-aware entropy-regularized fusion gates with a bounded gain cap to limit gradient amplification. We evaluate SRM on TID2013 and KonIQ-10k under a white-box ℓ∞/ℓ2 attack ensemble that includes per-image regression objectives and a correlation-aware pairwise inversion objective (a ranking-inspired surrogate for correlation inversion), with expectation-over-transformation (EOT) and anti-gradient masking checks. At ϵ = 4/255 (ℓ∞), SRM improves worst-case robust Spearman’s rank-order correlation coefficient (SROCC; defined as the minimum over our fixed attack ensemble) by an absolute 0.06–0.08 SROCC points (i.e., correlation-coefficient units, not percentage gain) across datasets/backbones, while keeping clean SROCC within 0.00–0.01 of the baseline. We observe similar trends for the Pearson linear correlation coefficient (PLCC). Full article
(This article belongs to the Section Information and Communication Technologies)
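The Nyström building block named in (i) and (ii) approximates a full kernel matrix from a small landmark block, with a ridge term for conditioning: K ≈ K_xl (K_ll + λI)⁻¹ K_lx. A generic NumPy sketch follows; the kernel choice, landmark selection, and the paper's exact attention form and factorization details are assumptions here.

```python
import numpy as np

def nystrom_approx(X, landmarks, gamma=0.5, ridge=1e-3):
    """Nystrom low-rank surrogate of an RBF kernel matrix:
    K ~ K_xl (K_ll + ridge*I)^{-1} K_lx. The ridge term improves the
    conditioning of the small landmark block before it is solved."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K_xl = rbf(X, landmarks)                        # (n, m) cross-kernel
    K_ll = rbf(landmarks, landmarks)                # (m, m) landmark block
    K_ll += ridge * np.eye(len(landmarks))          # ridge conditioning
    return K_xl @ np.linalg.solve(K_ll, K_xl.T)     # low-rank surrogate

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
K_hat = nystrom_approx(X, X[:10])                   # first 10 points as landmarks
```

Only the m×m landmark block is ever factorized, which is what keeps the neck lightweight relative to full n×n attention.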

34 pages, 13605 KB  
Article
BUM: Bayesian Uncertainty Minimization for Transferable Adversarial Examples in SAR Recognition
by Hongqiang Wang, Yuqing Lan, Fuzhan Yue, Zhenghuan Xia and Tao Zhang
Remote Sens. 2026, 18(5), 693; https://doi.org/10.3390/rs18050693 - 26 Feb 2026
Abstract
Adversarial examples pose a significant threat to Deep Neural Networks (DNNs) underpinning Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) systems, as these models exhibit acute susceptibility to such malicious inputs. While white-box attacks achieve high success rates, their transferability to unknown black-box models—particularly across different network architectures (e.g., from CNNs to Vision Transformers)—remains a significant challenge. Existing gradient-based iterative methods often overfit the specific decision boundary of the surrogate model, resulting in poor generalization. To address this, we propose a novel generative attack framework termed BUM. Instead of merely maximizing the classification error, BUM explicitly models and minimizes the epistemic uncertainty of the surrogate model. By leveraging Monte Carlo (MC) Dropout to simulate a Bayesian ensemble, we train a generator to craft perturbations that are consistently adversarial across stochastic sub-models. This regularization forces the attack to target high-level, structure-aware semantic features shared among architectures, rather than low-level, model-specific artifacts. Extensive experiments on the MSTAR and FUSAR datasets demonstrate the superior black-box transferability of BUM. Full article
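The MC Dropout ensemble at the heart of BUM keeps dropout active at inference and reads epistemic uncertainty from the spread of stochastic forward passes. Below is a toy NumPy sketch of that mechanism; the paper applies it to DNN surrogates on SAR imagery, not to this hypothetical two-layer model.

```python
import numpy as np

def mc_dropout_predict(x, W1, W2, p_drop=0.5, samples=200, rng=None):
    """Monte Carlo dropout sketch: each forward pass samples a random
    dropout mask (a stochastic sub-model); the std of the predictions
    across passes estimates epistemic uncertainty."""
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(samples):
        h = np.maximum(W1 @ x, 0.0)                  # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop         # stochastic sub-model
        h = h * mask / (1.0 - p_drop)                # inverted dropout scaling
        preds.append(W2 @ h)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)     # prediction, uncertainty

rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(16, 4)), rng.normal(size=(1, 16))
mean, std = mc_dropout_predict(rng.normal(size=4), W1, W2)
```

BUM's generator is trained to keep perturbations adversarial while this spread stays small, i.e., consistently fooling the whole stochastic ensemble rather than one fixed surrogate.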

21 pages, 3921 KB  
Article
Adversarial Example Generation Method Based on Wavelet Transform
by Meng Bi, Xiaoguo Liang, Baiyu Wang, Longxin Liu, Xin Yin and Jiafeng Liu
Information 2026, 17(2), 182; https://doi.org/10.3390/info17020182 - 10 Feb 2026
Viewed by 458
Abstract
Adversarial examples are crucial tools for assessing the robustness of deep neural networks (DNNs) and revealing potential security vulnerabilities. Adversarial example generation methods based on Generative Adversarial Networks (GANs) have made significant progress in generating image adversarial examples, but still suffer from insufficient sparsity and transferability. To address these issues, this study proposes a novel semi-white-box untargeted adversarial example generation method named Wavelet-AdvGAN, with an explicitly defined threat model. The attack is strictly untargeted: it has no predefined target categories and aims solely to mislead DNNs into classifying adversarial examples into any category other than the original label. It adopts a semi-white-box setting in which attackers have no access to the target model's private information. Regarding the generator's information dependence, the training phase uses only public resources (the target model's public architecture and the CIFAR-10 public training data), while the test phase generates adversarial examples through a one-step feedforward pass over clean images, without interacting with the target model. The method incorporates a Frequency Sub-band Difference (FSD) module and a Wavelet Transform Local Feature (WTLF) extraction module, which evaluate the differences between original and adversarial examples from a frequency-domain perspective. This constrains the magnitude of perturbations, reinforces feature regions, and further enhances attack effectiveness, thereby improving the sparsity and transferability of the adversarial examples. Experimental results demonstrate that Wavelet-AdvGAN achieves an average increase of 1.26% in attack success rate under two defense strategies (data augmentation and adversarial training), and adversarial transferability improves by an average of 2.7%. Moreover, the proposed method exhibits a lower l0 norm, indicating better perturbation sparsity, and thus provides an effective means of evaluating the robustness of deep neural networks. Full article
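The frequency-domain comparison underlying a sub-band difference module can be sketched with a one-level 2D Haar transform. This is a minimal illustration, not the paper's FSD implementation; the function names `haar_dwt2` and `subband_difference` are invented here. Decomposing clean and adversarial images into LL/LH/HL/HH sub-bands lets a loss penalize how much perturbation energy lands in each frequency band.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) sub-bands.

    img must have even height and width; rows are averaged/differenced
    first, then columns.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low (approximation)
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def subband_difference(x, x_adv):
    """Per-sub-band mean absolute distance between clean and adversarial images."""
    return [np.abs(a - b).mean() for a, b in zip(haar_dwt2(x), haar_dwt2(x_adv))]

x = np.zeros((4, 4))
x_adv = x.copy()
x_adv[0, 0] = 1.0                             # a single sparse perturbation
diffs = subband_difference(x, x_adv)
```

A single-pixel perturbation spreads equally over all four sub-bands here, which is why penalizing sub-band differences encourages perturbations that are both sparse and spectrally localized.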
16 pages, 3389 KB  
Article
Hybrid White-Box/Black-Box Modeling and Control of a CO2 Heat Pump System Using Modelica and Deep Learning: A Case Study on Return-Water Temperature Control
by Ge Song, Qian Zhang and Natasa Nord
Energies 2026, 19(4), 908; https://doi.org/10.3390/en19040908 - 9 Feb 2026
Viewed by 362
Abstract
This study presents a hybrid modeling framework integrating a deep learning-based black-box model of a CO2 heat pump with a physics-based white-box system model developed in Modelica. The approach reduces the complexity of thermodynamic modeling while maintaining system-level accuracy. A deep neural network (DNN) trained on measured data predicts outlet temperatures and compressor power, and is coupled with the Modelica model through the Functional Mock-up Unit (FMU) interface. The framework was applied to a ground-source CO2 heat pump system in Oslo, Norway, to evaluate hysteresis-based control strategies with different return temperature ranges (20–50 °C, 20–55 °C, 20–70 °C) and flow rates (1.3–1.5 kg/s). Results showed similar total heating but 25% lower compressor energy use for the 20–50 °C, 1.5 kg/s case compared to the 20–70 °C case. Temperature-based control improved the coefficient of performance (COP) of the heat pump, while narrower temperature ranges and lower flow rates enhanced tank stratification and heat utilization. The findings demonstrate the effectiveness of the hybrid model for dynamic simulation and control optimization of CO2 heat pump systems. Full article
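The hysteresis-based control strategy evaluated above can be sketched as a two-threshold switch. This is an illustrative toy, not the Modelica/FMU implementation: the thresholds use the study's best-performing 20–50 °C band, but the tank dynamics (heating and cooling rates) below are invented for the example.

```python
def hysteresis_controller(temp, on, t_low=20.0, t_high=50.0):
    """Return the next heat-pump on/off state for a return-water temperature.

    Switch on at or below t_low, off at or above t_high, and otherwise
    keep the current state, so the pump does not chatter near a setpoint.
    """
    if temp <= t_low:
        return True
    if temp >= t_high:
        return False
    return on

# Crude tank model: heating raises the return temperature, the load cools it.
temps, state = [], False
t = 25.0
for _ in range(200):
    state = hysteresis_controller(t, state)
    t += 1.5 if state else -0.8
    temps.append(t)
```

The temperature cycles within roughly one heating/cooling step of the 20–50 °C band, which is the qualitative behavior a narrower hysteresis range trades against compressor cycling frequency.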