Search Results (7,158)

Search Parameters:
Keywords = Aggregate data

34 pages, 97018 KB  
Article
Identifying Fresh Groundwater Potential in Unconfined Aquifers in Arid Central Asia: A Remote Sensing and Geo-Information Modeling Approach
by Evgeny Sotnikov, Zhuldyzbek Onglassynov, Kanat Kanafin, Ronny Berndtsson, Valentina Rakhimova, Oxana Miroshnichenko, Shynar Gabdulina and Kamshat Tussupova
Water 2025, 17(20), 2985; https://doi.org/10.3390/w17202985 - 16 Oct 2025
Abstract
Arid regions in Central Asia face persistent and increasing water scarcity, with groundwater serving as the primary source for drinking water, irrigation, and industry. The effective exploration and management of groundwater resources are critical but constrained by limited monitoring infrastructure and complex hydrogeological settings. This study investigates the Akbakay aquifer, a representative area within Central Asia with challenging hydrogeological conditions, to delineate potential zones for fresh groundwater exploration. A multi-criteria decision analysis was conducted by integrating the Analytical Hierarchy Process (AHP) with Geographic Information Systems (GIS), supported by remote sensing datasets. To address the subjectivity of weight assignment, the AHP results were further validated using Monte Carlo simulations and fuzzy logic aggregation (Fuzzy Gamma). The integrated approach revealed stable high-suitability groundwater zones that consistently stand out across deterministic, probabilistic, and fuzzy assessments, thereby improving the reliability of the groundwater potential mapping. The findings demonstrate the applicability of combined AHP–GIS methods enhanced with uncertainty analysis for sustainable groundwater resource management in data-scarce arid regions of Central Asia.
(This article belongs to the Special Issue Regional Geomorphological Characteristics and Sedimentary Processes)
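
Since the AHP-derived criterion weights drive the GIS overlay described above, a minimal sketch of AHP weight derivation with Saaty's consistency check may be useful. The 4-criterion pairwise matrix below is hypothetical, not the paper's actual judgments.

```python
# AHP: principal-eigenvector weights plus consistency ratio (CR < 0.1 acceptable).
import numpy as np

# Saaty's random consistency index for matrix sizes 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(A: np.ndarray):
    """Return criterion weights and the consistency ratio of pairwise matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))             # index of principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                 # normalize weights to sum to 1
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    return w, ci / RI[n]                         # (weights, consistency ratio)

# Hypothetical judgments for four groundwater criteria (illustrative only).
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1]], dtype=float)
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))              # CR below 0.1 => acceptable
```

The study's Monte Carlo validation could then perturb the entries of A and re-derive weights to check the stability of the resulting suitability ranking.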
Show Figures

Figure 1

21 pages, 425 KB  
Article
Model-Free Feature Screening Based on Data Aggregation for Ultra-High-Dimensional Longitudinal Data
by Junfeng Chen, Xiaoguang Yang, Jing Dai and Yunming Li
Stats 2025, 8(4), 99; https://doi.org/10.3390/stats8040099 - 16 Oct 2025
Abstract
Feature screening procedures for ultra-high-dimensional longitudinal data are widely studied, but most require model assumptions, and their screening performance can degrade when the model is misspecified. To address this problem, a new model-free method is introduced in which feature screening is performed by sample splitting and data aggregation. Distance correlation is used to measure the association at each time point separately, while longitudinal correlation is modeled by a specific cumulative distribution function to achieve efficiency. In addition, we extend this new method to handle situations where the predictors are correlated. Both methods possess desirable asymptotic properties and can handle longitudinal data with unequal numbers of repeated measurements and unequal intervals between measurement time points. Compared with other model-free methods, the two new methods are relatively insensitive to within-subject correlation and can help reduce the computational burden when applied to longitudinal data. Finally, simulated and empirical examples show that both new methods achieve better screening performance.
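
A minimal sketch of the empirical distance correlation used above as the per-time-point association measure. The screening rule in the closing comment (ranking predictors by their maximum distance correlation across time points) is an illustrative assumption, not the paper's exact aggregation scheme.

```python
import numpy as np

def distance_correlation(x, y):
    """Empirical (biased) distance correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                 # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                              # squared distance covariance
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(distance_correlation(x, x**2))                    # nonlinear dependence: clearly > 0
print(distance_correlation(x, rng.normal(size=200)))    # independence: near 0

# Screening sketch: score predictor j by max over time points t of
# distance_correlation(X[:, j, t], y[:, t]), then keep the top-ranked set.
```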

27 pages, 21611 KB  
Article
Aggregation in Ill-Conditioned Regression Models: A Comparison with Entropy-Based Methods
by Ana Helena Tavares, Ana Silva, Tiago Freitas, Maria Costa, Pedro Macedo and Rui A. da Costa
Entropy 2025, 27(10), 1075; https://doi.org/10.3390/e27101075 - 16 Oct 2025
Abstract
Despite advances in data analysis methodologies over recent decades, most traditional regression methods cannot be directly applied to large-scale data. Although aggregation methods are specifically designed to deal with large-scale data, their performance may be strongly reduced in ill-conditioned problems (due to collinearity issues). This work compares the performance of a recent approach based on normalized entropy, a concept from information theory and info-metrics, with bagging and magging, two well-established aggregation methods in the literature, providing valuable insights for regression analysis with large-scale data. While the results reveal a similar performance between methods in terms of prediction accuracy, the approach based on normalized entropy largely outperforms the other methods in terms of precision accuracy, even with a smaller number of groups and observations per group, which represents an important advantage in inference problems with large-scale data. This work also warns of the risk of using the OLS estimator, particularly under collinearity, given that data scientists frequently use linear models as a simplified view of reality in big data analysis and the OLS estimator is routinely used in practice. Beyond the promising findings of the simulation study, our estimation and aggregation strategies show strong potential for real-world applications in fields such as econometrics, genomics, environmental sciences, and machine learning, where challenges such as noise and ill-conditioning are persistent.
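
A minimal sketch of the bagging baseline named above: disjoint groups, per-group OLS, and a simple average of the group estimates. The synthetic data and group count are illustrative; the normalized-entropy estimator itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, G = 10_000, 5, 20
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta_true + rng.normal(size=n)

groups = np.array_split(rng.permutation(n), G)            # disjoint observation groups
betas = [np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]   # per-group OLS fit
         for idx in groups]
beta_bag = np.mean(betas, axis=0)                         # aggregate by averaging
print(np.round(beta_bag, 3))                              # close to beta_true here
```

Under collinearity the per-group OLS fits become unstable, which is exactly the regime where the paper reports the normalized-entropy approach pulling ahead in precision.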

17 pages, 1040 KB  
Article
Evaluating NSQIP Outcomes According to the Clavien–Dindo Classification: A Model to Estimate Global Outcome Measures Following Hepatopancreaticobiliary Surgery
by Kevin Verhoeff, Sukhdeep Jatana, Ahmer Irfan and Gonzalo Sapisochin
Livers 2025, 5(4), 50; https://doi.org/10.3390/livers5040050 - 16 Oct 2025
Abstract
Background: The National Surgical Quality Improvement Program (NSQIP) database provides one of the largest repositories of surgical outcome data, guiding local, national, and international quality improvement and research. We aim to describe a model to estimate Clavien–Dindo complication (CDC) rates from NSQIP data to enable comprehensive outcome measurement, allowing an NSQIP-based surrogate measure for longer-term outcomes. Methods: This is a validation study of a model to estimate CDCs from NSQIP data for pancreaticoduodenectomy (PD) and hepatic resection (HR). The primary objective is to evaluate whether our method of estimating CDC ≥ 3 outcomes from NSQIP data yields serious complication rates similar to those of large benchmark studies on outcomes following PD and HR. Secondary outcomes evaluate whether specific NSQIP outcomes provide adequate information to estimate CDC grades I–V following PD and HR. Results: We evaluated 20,575 patients undergoing PD, with 71.3% having pancreatic ductal adenocarcinoma. Comparing CDC ≥ 3 complications for NSQIP and benchmark PD patients, our model estimated a 23.2% rate, significantly lower than the 27.6% reported in the benchmark study (p < 0.001). Additionally, the benchmark reported higher complication rates for every CDC grade compared to our estimates using NSQIP PD patients (p < 0.001). Further, we evaluated 29,809 patients within NSQIP undergoing HR, where most patients with a listed diagnosis had colorectal cancer metastases (30.8%). Compared to the benchmark HR study (n = 2159), the NSQIP patients were less likely to have hepatic resection for malignancy (57.7% vs. 84.0%; p < 0.001). Comparing CDC ≥ 3 complications following HR demonstrated rates that were clinically similar (13.0% vs. 15.8%) but statistically different between the benchmark study and NSQIP data (p < 0.001). Additionally, the NSQIP patients had lower rates of estimated complications for nearly all CDC grades (p < 0.001). Conclusions: This is the first reported method to estimate aggregate morbidity from NSQIP data. Results demonstrate that, despite differences between this and the comparator cohorts, this model may underestimate CDC grade 1–2 complications but provides similar rates of CDC ≥ 3 complications compared to benchmark studies. Future studies to validate or modify this estimation method are warranted and may allow extrapolation of short-term NSQIP measures to oncologic, quality-of-life, and long-term outcomes.
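
The paper's actual NSQIP-to-CDC mapping is not given in this listing, so the sketch below is purely hypothetical and shows only the shape such an estimator could take: a record's complication flags (field names invented here) map to the highest applicable Clavien–Dindo grade.

```python
def estimate_cdc_grade(rec: dict) -> int:
    """Return the highest applicable estimated Clavien-Dindo grade for one record."""
    if rec.get("death_30d"):
        return 5   # grade V: death
    if rec.get("septic_shock") or rec.get("prolonged_ventilation"):
        return 4   # grade IV: life-threatening, organ support
    if rec.get("reoperation") or rec.get("organ_space_ssi"):
        return 3   # grade III: requires surgical/radiologic intervention
    if rec.get("transfusion") or rec.get("pneumonia") or rec.get("uti"):
        return 2   # grade II: requires pharmacologic treatment
    if rec.get("superficial_ssi"):
        return 1   # grade I: deviation without specific treatment
    return 0       # no recorded complication

print(estimate_cdc_grade({"reoperation": True, "uti": True}))  # -> 3
```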

29 pages, 5708 KB  
Article
Exploring the Spatiotemporal Impact of Landscape Patterns on Carbon Emissions Based on the Geographically and Temporally Weighted Regression Model: A Case Study of the Yellow River Basin in China
by Junhui Hu, Yang Du, Yueshan Ma, Danfeng Liu, Jingwei Yu and Zefu Miao
Sustainability 2025, 17(20), 9140; https://doi.org/10.3390/su17209140 - 15 Oct 2025
Abstract
In promoting the “dual-carbon goals” and sustainable development strategy, analyzing the spatiotemporal response mechanism of landscape patterns to carbon emissions is a critical foundation for achieving carbon emission reductions. However, existing research primarily targets urbanized zones or individual ecosystem types, often overlooking how landscape patterns affect carbon emissions across entire watersheds. This research examines the spatiotemporal characteristics of carbon emissions and landscape patterns in China’s Yellow River Basin, using Kernel Density Estimation, Moran’s I, and landscape indices. The Geographically and Temporally Weighted Regression model is used to analyze the impact of landscape patterns and their spatiotemporal changes, and recommendations for sustainable low-carbon development planning are made accordingly. The findings indicate the following: (1) Overall carbon emissions show a spatial pattern of “low upstream, high midstream, and medium downstream”, with obvious spatial clustering characteristics. (2) The degree of fragmentation in the upstream area decreases while aggregation and heterogeneity increase; landscape fragmentation in the midstream area increases, aggregation decreases, and diversity increases; the landscape pattern in the downstream area is generally stable, with increasing diversity. (3) The number of patches, staggered adjacency index, separation index, connectivity index, and modified Simpson’s evenness index are positively correlated with carbon emissions; landscape area, patch density, maximum number of patches, and average shape index are negatively correlated with carbon emissions; areas positively and negatively correlated with average patch area are more evenly distributed, while the spread index shows a nonlinear relationship. (4) The effects of landscape pattern indices on carbon emissions exhibit substantial spatial heterogeneity. For example, the negative impact of landscape area expands upstream, patch density maintains a strengthened negative effect downstream, and the diversity index shifts from negative to positive in the upper reaches but remains stable downstream. This study offers a scientific foundation and data support for optimizing landscape patterns and promoting low-carbon sustainable development in the basin, aiding the establishment of carbon reduction strategies.
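
A minimal sketch of one common Geographically and Temporally Weighted Regression formulation: a Gaussian space-time kernel weighting each observation, followed by weighted least squares at the regression point. The bandwidth h and space-time scale factor lam are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def gtwr_weights(coords, times, x0, t0, h=2.0, lam=1.0):
    """Kernel weights for a local fit at location x0 and time t0."""
    d2_space = np.sum((coords - x0) ** 2, axis=1)    # squared spatial distances
    d2_time = (times - t0) ** 2                      # squared temporal distances
    d2 = d2_space + lam * d2_time                    # combined space-time distance
    return np.exp(-d2 / h**2)                        # Gaussian kernel

def gtwr_fit_local(X, y, w):
    """Weighted least squares: local coefficients at one regression point."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

coords = np.array([[0, 0], [1, 0], [0, 1], [2, 2]], dtype=float)
times = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones(4), coords])            # intercept + coordinates
y = np.array([1.0, 2.0, 3.0, 4.0])
w = gtwr_weights(coords, times, x0=coords[0], t0=0.0)
print(gtwr_fit_local(X, y, w))                       # coefficients vary by (x0, t0)
```

Repeating the local fit at every city and year is what yields the spatially and temporally varying coefficients the abstract describes.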

21 pages, 7199 KB  
Article
A High-Resolution Dynamic Marine Traffic Flow Visualization Model Using AIS Data
by Do Hyun Oh, Fan Zhu and Namkyun Im
J. Mar. Sci. Eng. 2025, 13(10), 1971; https://doi.org/10.3390/jmse13101971 - 15 Oct 2025
Abstract
The introduction of Maritime Autonomous Surface Ships (MASS) and the accelerating digitalization of ports require precise and dynamic analysis of traffic conditions. However, conventional marine traffic analyses have been limited to low-resolution grids and static density visualizations that do not fully integrate vessel direction and speed. To address this limitation, this study proposes a traffic flow visualization model that incorporates the dynamic structure of maritime traffic. The model integrates density, dominant direction, and average speed into a single symbol, thereby complementing the limitations of static analyses. In addition, high-resolution grids of approximately 90 m were applied to enable detailed analysis. AIS data collected between 2022 and 2023 from the coastal waters of Mokpo, South Korea, were preprocessed, aggregated into grid cells, and analyzed to estimate representative directions (at 10° intervals) and average speeds. These results were visualized through the color, thickness, length, and direction of arrows. The analysis revealed high-density, low-speed traffic patterns and starboard-passage behavior in port approaches and narrow channels, while irregular directions with low density were observed on non-standard routes. The proposed model provides a visual representation of dynamic traffic structures that cannot be revealed by density maps alone, offering practical applicability for MASS route planning, VTS operation support, and risk assessment.
(This article belongs to the Section Ocean Engineering)
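
A minimal sketch of the per-cell aggregation step described above: binning AIS reports into a grid, taking the most frequent 10° course bin as the dominant direction, and averaging speed. Column names and the cell size in degrees are assumptions, not the paper's exact pipeline.

```python
import numpy as np
import pandas as pd

CELL = 0.0008  # roughly 90 m in degrees of latitude (illustrative)

def aggregate_ais(df: pd.DataFrame) -> pd.DataFrame:
    """df columns: lat, lon, cog (course over ground, deg), sog (speed, knots)."""
    df = df.assign(
        row=(df["lat"] // CELL).astype(int),
        col=(df["lon"] // CELL).astype(int),
        cog_bin=(((df["cog"] // 10) * 10) % 360).astype(int),  # 10-degree bins
    )
    def summarize(g):
        return pd.Series({
            "density": len(g),                            # reports per cell
            "dominant_dir": g["cog_bin"].mode().iloc[0],  # most frequent course bin
            "mean_speed": g["sog"].mean(),
        })
    return df.groupby(["row", "col"]).apply(summarize).reset_index()

demo = pd.DataFrame({"lat": [34.780, 34.780, 34.781],
                     "lon": [126.380, 126.380, 126.380],
                     "cog": [45.0, 50.0, 180.0],
                     "sog": [8.0, 9.0, 5.0]})
print(aggregate_ais(demo))   # density, dominant direction, and mean speed per cell
```

Each aggregated row would then drive one arrow symbol: density as color, mean speed as length or thickness, and dominant direction as orientation.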

37 pages, 8931 KB  
Article
Predicting the Properties of Polypropylene Fiber Recycled Aggregate Concrete Using Response Surface Methodology and Machine Learning
by Hany A. Dahish and Mohammed K. Alkharisi
Buildings 2025, 15(20), 3709; https://doi.org/10.3390/buildings15203709 - 15 Oct 2025
Abstract
The use of recycled coarse aggregate (RCA) concrete and polypropylene fibers (PPFs) presents a sustainable alternative in concrete production. However, the non-linear and interactive effects of RCA and PPF on both fresh and hardened properties are not yet fully quantified. This study employs Response Surface Methodology (RSM) and the Random Forest (RF) algorithm with K-fold cross-validation to predict the combined effect of RCA as a partial replacement for natural coarse aggregate and PPF on the engineering properties of RCA-PPF concrete, addressing the critical need for a robust, data-driven modeling framework. A dataset of 144 tested samples obtained from the literature was used to develop and validate the prediction models. Three input variables were considered: RCA, PPF, and curing age (Age). The examined responses were compressive strength (CS), tensile strength (TS), ultrasonic pulse velocity (UPV), and water absorption (WA). To assess the developed models, statistical metrics were calculated and analysis of variance (ANOVA) was employed. Afterwards, the responses were optimized within the RSM framework. The optimal results, obtained by maximizing TS, CS, and UPV and minimizing WA, were achieved at a PPF of 3% by volume of concrete and an RCA of approximately 100% replacing natural coarse aggregate, highlighting optimal reuse of recycled aggregate, with a curing age of 83.6 days. The RF model demonstrated superior performance, significantly outperforming the RSM model. Feature importance analysis via SHAP values identified the most influential parameters for the predictions. The results confirm that ML techniques provide a powerful and accurate tool for optimizing sustainable concrete mixes.
(This article belongs to the Section Building Structures)
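
A minimal sketch of the RF-with-K-fold-cross-validation workflow described above, via scikit-learn. The synthetic 144-row dataset only mimics the input layout (RCA %, PPF %, Age in days) and a single response; it is not the paper's experimental data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform([0, 0, 1], [100, 3, 90], size=(144, 3))   # RCA, PPF, Age (synthetic)
y = (50 - 0.1 * X[:, 0] + 2 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 1, 144))                             # stand-in for CS (MPa)

model = RandomForestRegressor(n_estimators=300, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(round(scores.mean(), 3))                            # mean cross-validated R^2
```

SHAP-style importances could then be computed on the fitted model to rank RCA, PPF, and Age, as the study does.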

27 pages, 1063 KB  
Article
FLEX-SFL: A Flexible and Efficient Split Federated Learning Framework for Edge Heterogeneity
by Hao Yu, Jing Fan, Hua Dong, Yadong Jin, Enkang Xi and Yihang Sun
Sensors 2025, 25(20), 6355; https://doi.org/10.3390/s25206355 - 14 Oct 2025
Abstract
The deployment of Federated Learning (FL) in edge environments is often impeded by system heterogeneity, non-independent and identically distributed (non-IID) data, and constrained communication resources, which collectively hinder training efficiency and scalability. To address these challenges, this paper presents FLEX-SFL, a flexible and efficient split federated learning framework that jointly optimizes model partitioning, client selection, and communication scheduling. FLEX-SFL incorporates three coordinated mechanisms: a device-aware adaptive segmentation strategy that dynamically adjusts model partition points based on client computational capacity to mitigate straggler effects; an entropy-driven client selection algorithm that promotes data representativeness by leveraging label distribution entropy; and a hierarchical local asynchronous aggregation scheme that enables asynchronous intra-cluster and inter-cluster model updates to improve training throughput and reduce communication latency. We theoretically establish the convergence properties of FLEX-SFL under convex settings and analyze the influence of local update frequency and client participation on convergence bounds. Extensive experiments on benchmark datasets including FMNIST, CIFAR-10, and CIFAR-100 demonstrate that FLEX-SFL consistently outperforms state-of-the-art FL and split FL baselines in terms of model accuracy, convergence speed, and resource efficiency, particularly under high degrees of statistical and system heterogeneity. These results validate the effectiveness and practicality of FLEX-SFL for real-world edge intelligent systems.
(This article belongs to the Section Sensor Networks)
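
A minimal sketch of entropy-driven client selection: clients whose local label histograms have higher Shannon entropy (more balanced class coverage) are preferred. The scoring and top-k rule illustrate the general idea, not FLEX-SFL's exact algorithm.

```python
import numpy as np

def label_entropy(counts: np.ndarray) -> float:
    """Shannon entropy of one client's label histogram."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_clients(histograms, k: int):
    """Indices of the k clients with the highest label-distribution entropy."""
    scores = np.array([label_entropy(h) for h in histograms])
    return sorted(np.argsort(scores)[-k:].tolist())

# Four clients with 3-class label histograms; pick the two most balanced.
hists = [np.array([90, 5, 5]), np.array([30, 40, 30]),
         np.array([0, 100, 0]), np.array([33, 33, 34])]
print(select_clients(hists, 2))   # -> [1, 3]
```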

31 pages, 1516 KB  
Article
Federated Quantum Machine Learning for Distributed Cybersecurity in Multi-Agent Energy Systems
by Kwabena Addo, Musasa Kabeya and Evans Eshiemogie Ojo
Energies 2025, 18(20), 5418; https://doi.org/10.3390/en18205418 - 14 Oct 2025
Abstract
The increasing digitization and decentralization of modern energy systems have heightened their vulnerability to sophisticated cyber threats, necessitating advanced, scalable, and privacy-preserving detection frameworks. This paper introduces a novel Federated Quantum Machine Learning (FQML) framework tailored for anomaly detection in multi-agent energy environments. By integrating parameterized quantum circuits (PQCs) at the local agent level with secure federated learning protocols, the framework enhances detection accuracy while preserving data privacy. A trimmed-mean aggregation scheme and differential privacy mechanisms are embedded to defend against Byzantine behaviors and data-poisoning attacks. The problem is formally modeled as a constrained optimization task, accounting for quantum circuit depth, communication latency, and adversarial resilience. Experimental validation on synthetic smart grid datasets demonstrates that FQML achieves high detection accuracy (≥96.3%), maintains robustness under adversarial perturbations, and reduces communication overhead by 28.6% compared to classical federated baselines. These results substantiate the viability of quantum-enhanced federated learning as a practical, hardware-conscious approach to distributed cybersecurity in next-generation energy infrastructures.
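
A minimal sketch of coordinate-wise trimmed-mean aggregation, the Byzantine-robust scheme named above: extremes are discarded per coordinate before averaging. The trim fraction is an assumption, not the paper's tuned value.

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_frac: float = 0.2) -> np.ndarray:
    """updates: (n_clients, n_params). Trim extremes per coordinate, then average."""
    k = int(trim_frac * updates.shape[0])
    s = np.sort(updates, axis=0)                 # sort each coordinate independently
    kept = s[k:updates.shape[0] - k] if k > 0 else s
    return kept.mean(axis=0)

# Ten honest updates near 1.0 plus two scaled malicious updates.
rng = np.random.default_rng(0)
U = np.vstack([rng.normal(1.0, 0.05, size=(10, 4)),
               np.full((2, 4), 50.0)])
print(np.round(trimmed_mean(U), 3))              # stays near 1.0 despite outliers
```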

42 pages, 2226 KB  
Article
Sustainable Component-Level Prioritization of PV Panels, Batteries, and Converters for Solar Technologies in Hybrid Renewable Energy Systems Using Objective-Weighted MCDM Models
by Swapandeep Kaur, Raman Kumar and Kanwardeep Singh
Energies 2025, 18(20), 5410; https://doi.org/10.3390/en18205410 - 14 Oct 2025
Abstract
Data-driven prioritization of photovoltaic (PV), battery, and converter technologies is crucial for achieving sustainability, efficiency, and cost-effectiveness in the increasingly complex domain of hybrid renewable energy systems (HRES). An in-depth and systematic ranking of these components for solar-based HRESs necessitates a comprehensive multi-criteria decision-making (MCDM) framework, which this study develops as an integrated approach. To ensure balanced and objective weighting, five quantitative weighting techniques (Entropy, Standard Deviation, CRITIC, MEREC, and CILOS) were aggregated through the Bonferroni operator, thereby minimizing subjective bias while preserving robustness. The final ranking was executed using the Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method. Comparative validation was then conducted across eight additional MCDM methods, supplemented by correlation and sensitivity analysis to evaluate the consistency and reliability of the results. The results revealed that thin-film PV modules (0.7108), hybrid supercapacitor batteries (0.6990), and modular converters (1.1812) emerged as the top-performing technologies, reflecting optimal trade-offs among technical, economic, and environmental performance criteria. Correlation analysis (ρ > 0.9 across nine MCDM methods) confirmed the stability of the rankings. These technologies demonstrated superior thermal stability, cycling endurance, and system scalability, respectively, laying a foundation for more sustainable and resilient hybrid energy system deployments. The proposed framework provides a reproducible, transparent, and resilient decision-support tool to assist engineers, researchers, and policy-makers in developing reliable low-carbon components for future carbon-neutral energy infrastructures.
(This article belongs to the Section A2: Solar Energy and Photovoltaic Systems)
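
A minimal sketch of the Entropy weighting technique, one of the five objective weighting methods aggregated in the study: criteria whose values diverge more across alternatives receive larger weights. The 3-by-3 decision matrix is illustrative, not the study's component data.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """X: alternatives x criteria (positive values). Returns one weight per criterion."""
    P = X / X.sum(axis=0)                               # normalize each criterion
    m = X.shape[0]
    logP = np.log(P, where=P > 0, out=np.zeros_like(P))
    E = -(P * logP).sum(axis=0) / np.log(m)             # entropy per criterion
    d = 1.0 - E                                         # degree of divergence
    return d / d.sum()

X = np.array([[0.71, 200.0, 0.92],
              [0.65, 180.0, 0.95],
              [0.80, 150.0, 0.90]])                     # 3 alternatives, 3 criteria
print(np.round(entropy_weights(X), 3))                  # weights sum to 1
```

The study then combines such weight vectors (Entropy, Standard Deviation, CRITIC, MEREC, CILOS) through the Bonferroni operator before the MARCOS ranking.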

21 pages, 6020 KB  
Article
Trees as Sensors: Estimating Wind Intensity Distribution During Hurricane Maria
by Vivaldi Rinaldi, Giovanny Motoa and Masoud Ghandehari
Remote Sens. 2025, 17(20), 3428; https://doi.org/10.3390/rs17203428 - 14 Oct 2025
Abstract
Hurricane Maria crossed Puerto Rico with winds as high as 250 km/h, causing widespread damage and the loss of weather station data, thus limiting direct measurements of wind variability. Here, we identified more than 155 million trees to estimate the distribution of wind speed over 9000 km2 of land from island-wide LiDAR point clouds collected before and after the hurricane. The point clouds were classified and rasterized into a canopy height model to perform individual tree identification and change detection analysis. Each tree's stem diameter at breast height was estimated using a function of delineated crown and extracted canopy height, validated against records from Puerto Rico's 2003 Forest Inventory. The results indicate that approximately 35.7% of trees broke at the stem below the canopy center and 28.5% above the canopy center. Furthermore, we back-calculated the critical wind speed, i.e., the minimum speed required to cause breakage, at the individual tree level by applying a mechanical model that uses the estimated diameter at breast height, the extrapolated breakage height, and the pre-Hurricane Maria canopy height. Individual trees were then aggregated into 115 km2 cells to summarize each cell's critical wind speed distribution based on the percentage of stem breakage. A vertical wind profile analysis was then applied to derive the hurricane wind distribution using the mean hourly wind speed 10 m above the canopy center. The estimated wind speed ranges from 250 km/h in the southeast at landfall to 100 km/h in the southwestern parts of the island. Comparison of the modeled wind speeds with wind gust readings at the few remaining NOAA stations supports the use of tree breakage to model the distribution of hurricane wind speed when ground readings are sparse.
(This article belongs to the Section Environmental Remote Sensing)
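
A minimal sketch of a logarithmic vertical wind profile, one standard way to relate wind speed just above a canopy to other heights; the roughness length and zero-plane displacement below are illustrative assumptions, not the paper's calibrated profile.

```python
import numpy as np

def log_wind_profile(u_ref, z_ref, z, z0=1.0, d=15.0):
    """Scale wind speed u_ref at height z_ref (m) to height z (m) over a canopy.

    z0: roughness length (m); d: zero-plane displacement (m). Both illustrative.
    """
    return u_ref * np.log((z - d) / z0) / np.log((z_ref - d) / z0)

# Example: 120 km/h measured 10 m above a 20 m canopy, extrapolated to 50 m.
print(round(log_wind_profile(120.0, z_ref=30.0, z=50.0), 1))
```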

26 pages, 1008 KB  
Article
FedECPA: An Efficient Countermeasure Against Scaling-Based Model Poisoning Attacks in Blockchain-Based Federated Learning
by Rukayat Olapojoye, Tara Salman, Mohamed Baza and Ali Alshehri
Sensors 2025, 25(20), 6343; https://doi.org/10.3390/s25206343 - 14 Oct 2025
Abstract
Artificial intelligence (AI) and machine learning (ML) have become integral to various applications, leveraging vast amounts of heterogeneous, globally distributed Internet of Things (IoT) data to identify patterns and build accurate ML models for predictive tasks. Federated learning (FL) is a distributed ML technique developed to learn from such distributed data while ensuring privacy. Nevertheless, traditional FL requires a central server for aggregation, which can be a central point of failure and raises trust issues. Blockchain-based federated learning (BFL) has emerged as an FL extension that provides guaranteed decentralization alongside other security assurances. However, due to the inherent openness of blockchain, BFL comes with several vulnerabilities that remain unexplored in the literature, such as a higher possibility of model poisoning attacks. This paper investigates how scaling-based model poisoning attacks are made easier in BFL systems and how they affect model performance. It then proposes FedECPA, an extension of the FedAvg aggregation algorithm with an Efficient Countermeasure against scaling-based model Poisoning Attacks in BFL. FedECPA filters out clients with outlier weights and protects the model against these attacks. Several experiments are conducted with different attack scenarios and settings, and we further compare our results to a frequently used defense mechanism, Multikrum. Results show the effectiveness of our defense mechanism in protecting BFL from these attacks. On the MNIST dataset, it maintains overall accuracies of 98% and 89%, outperforming the baseline by 4% and 38%, in IID and non-IID settings, respectively. Similar results were achieved with the CIFAR-10 dataset.
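
A minimal sketch in the spirit of filtering outlier client weights before FedAvg: updates whose parameter norms are robust-score outliers are dropped, then the survivors are averaged. The median/MAD rule and threshold are assumptions, not FedECPA's exact filter.

```python
import numpy as np

def filter_and_average(updates: np.ndarray, thresh: float = 3.0) -> np.ndarray:
    """updates: (n_clients, n_params). Drop norm outliers, then FedAvg the rest."""
    norms = np.linalg.norm(updates, axis=1)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12   # robust scale estimate
    keep = np.abs(norms - med) / mad < thresh      # robust z-score test
    return updates[keep].mean(axis=0)              # average surviving clients

# Eight honest updates near 0.5 plus two scaled (poisoned) updates.
rng = np.random.default_rng(1)
U = np.vstack([rng.normal(0.5, 0.1, size=(8, 3)), np.full((2, 3), 20.0)])
print(np.round(filter_and_average(U), 3))          # near 0.5: outliers filtered out
```

Scaling-based attacks inflate update magnitudes to dominate the average, which is why a norm-based outlier test is a natural fit for this threat model.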

21 pages, 1038 KB  
Article
Climate-Resilient City Pilot Programs and New-Quality Productivity: Causal Identification Based on Dual Machine Learning
by Yangchun Cao, Wenfeng Chen, Yating Tian and Yuqiang Zhang
Sustainability 2025, 17(20), 9088; https://doi.org/10.3390/su17209088 - 14 Oct 2025
Abstract
Climate change is a critical constraint on the development of new-quality productive forces (NQPFs), making it essential to clarify its relationship with urban development strategies in order to enhance productivity. Using panel data from 284 Chinese cities during 2010–2022, this study leverages the climate-resilient city pilot policy as a quasi-natural experiment and applies a double machine learning approach to estimate both the causal impact and the underlying mechanisms of this policy on NQPFs. We further examine heterogeneous effects across geographic regions and city types. Our findings show that, first, climate-resilient urban development significantly boosts NQPFs, with results remaining robust across multiple sensitivity tests. Second, this effect operates through three key channels: talent agglomeration, data flow enhancement, and infrastructure-related industrial upgrading. Third, the policy's impact is stronger in western and coastal cities; resource-based cities and non-environmentally protected cities exhibit greater responsiveness, amplifying the positive outcomes. This study provides systematic empirical evidence on the nexus between climate resilience and high-quality development, offering actionable insights for designing localized strategies to advance climate-resilient urbanization and foster high-quality productive forces.
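
A minimal sketch of generic partialling-out double machine learning with cross-fitting: both the treatment (pilot status) and the outcome are residualized on controls with a flexible learner, and the effect is the residual-on-residual slope. The nuisance models here are generic choices, not necessarily the paper's specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_ate(X, d, y, n_splits=2):
    """Effect of treatment d on outcome y, controlling flexibly for X."""
    res_d = np.zeros(len(d))
    res_y = np.zeros(len(y))
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        m_d = RandomForestRegressor(random_state=0).fit(X[train], d[train])
        m_y = RandomForestRegressor(random_state=0).fit(X[train], y[train])
        res_d[test] = d[test] - m_d.predict(X[test])   # cross-fitted residuals
        res_y[test] = y[test] - m_y.predict(X[test])
    return (res_d @ res_y) / (res_d @ res_d)           # residual-on-residual slope

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                          # city-level controls (synthetic)
d = (X[:, 0] + rng.normal(size=500) > 0).astype(float) # confounded pilot indicator
y = 2.0 * d + X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=500)
print(round(dml_ate(X, d, y), 2))                      # close to the true effect 2.0
```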

24 pages, 1535 KB  
Article
Enhanced Distributed Multimodal Federated Learning Framework for Privacy-Preserving IoMT Applications: E-DMFL
by Dagmawit Tadesse Aga and Madhuri Siddula
Electronics 2025, 14(20), 4024; https://doi.org/10.3390/electronics14204024 - 14 Oct 2025
Abstract
The rapid growth of Internet of Medical Things (IoMT) devices offers promising avenues for real-time, personalized healthcare while also introducing critical challenges related to data privacy, device heterogeneity, and deployment scalability. This paper presents E-DMFL, an Enhanced Distributed Multimodal Federated Learning framework designed to address these issues. Our approach combines systems analysis principles with intelligent model design, integrating PyTorch-based modular orchestration and TensorFlow-style data pipelines to enable multimodal edge-based training. E-DMFL incorporates gated attention fusion, differential privacy, Shapley-value-based modality selection, and peer-to-peer communication to facilitate secure and adaptive learning in non-IID environments. We evaluate the framework on the EarSAVAS dataset, which includes synchronized audio and motion signals from ear-worn sensors. E-DMFL achieves a test accuracy of 92.0% in just six communication rounds. The framework also supports energy-efficient, real-time deployment through quantization-aware training and battery-aware scheduling. These results demonstrate the potential of combining systems-level design with federated learning (FL) innovations to support practical, privacy-aware IoMT applications.
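
A minimal sketch of gated attention fusion for two modality embeddings (e.g., audio and motion): a learned sigmoid gate mixes the projected modalities feature by feature. Dimensions are illustrative, and this is a generic gating construction rather than E-DMFL's exact module.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, dim_out: int):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_out)
        self.proj_b = nn.Linear(dim_b, dim_out)
        self.gate = nn.Sequential(nn.Linear(dim_a + dim_b, dim_out), nn.Sigmoid())

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([a, b], dim=-1))        # per-feature gate in [0, 1]
        return g * self.proj_a(a) + (1 - g) * self.proj_b(b)

# Batch of 8 samples: 64-dim audio embeddings fused with 32-dim motion embeddings.
fused = GatedFusion(64, 32, 128)(torch.randn(8, 64), torch.randn(8, 32))
print(fused.shape)   # torch.Size([8, 128])
```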

21 pages, 4746 KB  
Article
Optimizing Steel Industry and Air Conditioning Clusters Using Coordination-Based Time-Series Fusion Transformer
by Xinyu Luo, Zhaofan Zhou, Bin Li, Yumeng Zhang, Chenle Yi, Kun Shi and Songsong Chen
Processes 2025, 13(10), 3265; https://doi.org/10.3390/pr13103265 - 13 Oct 2025
Abstract
The steel industry, a typical energy-intensive sector, experiences significant load power fluctuations, particularly during peak periods, posing challenges to power-grid stability. Traditional studies often overlook its unique production characteristics, limiting a comprehensive understanding of these power fluctuations. Meanwhile, air conditioning (AC), as a flexible load, offers stable regulation with an aggregation effect. This study explores the potential for coordinated load dispatch between the steel industry and air conditioning clusters to enhance power system flexibility. A power characteristic model for steel loads was developed based on energy consumption patterns, while a physical equivalent thermal parameter (ETP) model was used to aggregate air conditioning loads. To improve forecasting accuracy, a parallel LSTM-Transformer model predicts both steel and air conditioning loads. CEEMDAN-VMD decomposition reduces noise in the steel-load data, and the QR algorithm computes confidence intervals for load responses. The study further examines interactions between electric-arc furnace control strategies and air conditioning demand response. Case studies using real-world data demonstrate that the proposed model improves prediction accuracy, peak suppression, and variance reduction. These findings provide insights into steel industry power fluctuations and large-scale air conditioning load adjustments.
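
A minimal sketch of a first-order equivalent-thermal-parameter (ETP) model for a single air conditioner under a deadband thermostat, the kind of physical model commonly used to aggregate AC clusters. Thermal parameters and setpoints are illustrative, not the paper's values.

```python
import numpy as np

def etp_step(T_in, T_out, on, R=2.0, C=2.0, P=4.0, dt=1 / 60):
    """One dt-hour step of indoor temperature; R (degC/kW), C (kWh/degC), P (kW)."""
    a = np.exp(-dt / (R * C))
    T_eq = T_out - on * R * P                     # equilibrium temp for this state
    return a * T_in + (1 - a) * T_eq

# Simulate one hour with hysteresis control around a 24 degC setpoint.
T, on, hist = 25.0, 1, []
for _ in range(60):
    T = etp_step(T, T_out=32.0, on=on)
    on = 1 if T > 24.5 else (0 if T < 23.5 else on)   # deadband thermostat
    hist.append(T)
print(round(min(hist), 2), round(max(hist), 2))       # cools toward the setpoint
```

Summing the on/off power states of many such units then yields the aggregate flexible load that the cluster can offer for coordinated dispatch.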
