Search Results (60)

Search Parameters:
Keywords = default clusters

21 pages, 677 KB  
Systematic Review
Quantifying Statistical Heterogeneity and Reproducibility in Cooperative Multi-Agent Reinforcement Learning: A Meta-Analysis of the SMAC Benchmark
by Rex Li and Chunyu Liu
Algorithms 2025, 18(10), 653; https://doi.org/10.3390/a18100653 - 16 Oct 2025
Viewed by 327
Abstract
This study presents the first quantitative meta-analysis in cooperative multi-agent reinforcement learning (MARL). Using the StarCraft Multi-Agent Challenge (SMAC) benchmark, we quantify reproducibility and statistical heterogeneity across studies of the five algorithms introduced in the original SMAC paper (IQL, VDN, QMIX, COMA, QTRAN) on five widely used maps at a fixed 2M-step budget. The analysis pools win rates via multilevel mixed-effects meta-regression with cluster-robust variance and reports Algorithm × Map cell-specific heterogeneity and 95% prediction intervals. Results show that heterogeneity is pervasive: 17/25 cells exhibit high heterogeneity (I² ≥ 80%), indicating that between-study variance dominates sampling error. Moderator analyses find that publication year significantly explains part of the residual variance, consistent with secular drift in tooling and defaults. Prediction intervals are broad across most cells, implying that a new study can legitimately exhibit substantially lower or higher performance than the pooled means. The study underscores the need for standardized reporting (SC2 versioning, evaluation episode counts, hyperparameters), preregistered map panels, open code/configurations, and machine-readable curves to enable robust, heterogeneity-aware synthesis and more reproducible SMAC benchmarking. Full article
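
The paper pools win rates with a multilevel mixed-effects meta-regression; as a much simpler illustration of the heterogeneity and prediction-interval ideas it reports, the sketch below computes a DerSimonian–Laird random-effects estimate, I², and a 95% prediction interval from invented per-study win rates (not the authors' code or data).

```python
"""Minimal sketch: DerSimonian-Laird random-effects pooling with I^2 and a
95% prediction interval, on made-up win-rate data for one Algorithm x Map cell."""
import numpy as np
from scipy import stats

# Hypothetical per-study mean win rates and standard errors (illustrative only).
effects = np.array([0.62, 0.48, 0.71, 0.55, 0.66])
se = np.array([0.05, 0.07, 0.04, 0.06, 0.05])

w = 1.0 / se**2                              # fixed-effect weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
k = len(effects)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)           # between-study variance
I2 = max(0.0, (Q - (k - 1)) / Q) * 100       # heterogeneity in percent

w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
pooled = np.sum(w_re * effects) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))

# 95% prediction interval for a *new* study (t distribution with k-2 df).
t = stats.t.ppf(0.975, df=k - 2)
half = t * np.sqrt(tau2 + se_pooled**2)
print(f"pooled={pooled:.3f}, I2={I2:.1f}%, PI=({pooled-half:.3f}, {pooled+half:.3f})")
```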

28 pages, 1156 KB  
Article
Financial Systemic Risk and the COVID-19 Pandemic
by Xin Huang
Risks 2025, 13(9), 169; https://doi.org/10.3390/risks13090169 - 4 Sep 2025
Viewed by 726
Abstract
The COVID-19 pandemic has caused market turmoil and economic distress. To understand the effect of the pandemic on the U.S. financial systemic risk, we analyze the explanatory power of detailed COVID-19 data on three market-based systemic risk measures (SRMs): Conditional Value at Risk, Distress Insurance Premium, and SRISK. In the time-series dimension, we use the Dynamic OLS model and find that financial variables, such as credit default swap spreads, equity correlation, and firm size, significantly affect the SRMs, but the COVID-19 variables do not appear to drive the SRMs. However, if we focus on the first wave of the COVID-19 pandemic in March 2020, we find a positive and significant COVID-19 effect, especially before the government interventions. In the cross-sectional dimension, we run fixed-effect and event-study regressions with clustered variance-covariance matrices. We find that market capitalization helps to reduce a firm’s contribution to the SRMs, while firm size significantly predicts the surge in a firm’s SRM contribution when the pandemic first hits the system. The policy implications include that proper market interventions can help to mitigate the negative pandemic effect, and policymakers should continue the current regulation of required capital holding and consider size when designating systemically important financial institutions. Full article
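
For the cross-sectional part, the paper runs fixed-effect regressions with clustered variance-covariance matrices. The sketch below shows that general pattern with statsmodels on synthetic data; the variable names, data, and specification are invented for illustration and are not the authors' model.

```python
"""Sketch of a fixed-effects regression with firm-clustered standard errors,
in the spirit of the paper's cross-sectional analysis; data are synthetic."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_periods = 30, 40
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_periods),
    "period": np.tile(np.arange(n_periods), n_firms),
})
df["log_size"] = rng.normal(10, 1, len(df))
df["cds_spread"] = rng.gamma(2.0, 50.0, len(df))
df["covid_cases"] = rng.poisson(100, len(df))
df["srm_contribution"] = (0.3 * df["log_size"] + 0.001 * df["cds_spread"]
                          + rng.normal(0, 1, len(df)))

# Firm fixed effects via C(firm); variance-covariance matrix clustered by firm.
model = smf.ols("srm_contribution ~ log_size + cds_spread + covid_cases + C(firm)",
                data=df)
res = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(res.params[["log_size", "cds_spread", "covid_cases"]])
```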

23 pages, 2203 KB  
Review
Digital Academic Leadership in Higher Education Institutions: A Bibliometric Review Based on CiteSpace
by Olaniyi Joshua Olabiyi, Carl Jansen van Vuuren, Marieta Du Plessis, Yujie Xue and Chang Zhu
Educ. Sci. 2025, 15(7), 846; https://doi.org/10.3390/educsci15070846 - 2 Jul 2025
Cited by 1 | Viewed by 2255
Abstract
The continuous evolution of technology compels higher education leaders to adapt to VUCA (volatile, uncertain, complex, and ambiguous) and BANI (brittle, anxious, non-linear, and incomprehensible) environments through innovative strategies that ensure institutional relevance. While VUCA emphasizes the challenges posed by rapid change and uncertain decision-making, BANI underscores the fragility of systems, heightened anxiety, unpredictable causality, and the collapse of established patterns. Navigating these complexities requires agility, resilience, and visionary leadership to ensure that institutions remain adaptable and future-ready. This study presents a bibliometric analysis of digital academic leadership in higher education transformation, examining empirical studies, reviews, book chapters, and proceedings papers published from 2014 to 2024 (an 11-year period) in the Web of Science—Science Citation Index Expanded (SCIE) and Social Science Citation Index (SSCI). Using CiteSpace software (version 6.3.R1, 64-bit), we analyzed 5837 documents, identifying 24 key publications that formed a network of 90 nodes and 256 links. The reduction to 24 publications resulted from a multi-step refinement process using CiteSpace’s default thresholds and clustering algorithms, which identify the most influential and structurally significant nodes in a large corpus based on centrality, citation burst, and network clustering. These 24 documents form the core co-citation network, which serves as the conceptual backbone for further thematic interpretation. Our findings reveal six primary research clusters: “Enhancing Academic Performance”, “Digital Leadership Scale Adaptation”, “Construction Industry”, “Innovative Work Behavior”, “Development Business Strategy”, and “Education.” The analysis demonstrates a significant increase in publications over the decade, with the highest concentration in 2024, reflecting growing scholarly interest in this field. Keyword analysis shows “digital leadership”, “digital transformation”, “performance”, and “innovation” as dominant terms, highlighting the field’s evolution from technology-focused approaches to holistic leadership frameworks. Geographical analysis reveals significant contributions from Pakistan, Ireland, and India, indicating valuable insights emerging from diverse global contexts. These findings suggest that effective digital academic leadership requires not only technical competencies but also transformational capabilities, communication skills, and innovation management to enhance student outcomes and institutional performance in an increasingly digitalized educational landscape. Full article
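
CiteSpace automates the co-citation and centrality analysis described above; as a rough illustration of the underlying idea only (not CiteSpace's algorithm), the sketch below builds a co-citation graph and ranks references by betweenness centrality, using invented reference names and counts.

```python
"""Illustrative co-citation sketch: references cited together by the same
document become linked nodes, then nodes are ranked by betweenness centrality."""
import itertools
import networkx as nx

# Each entry lists the references cited together by one (hypothetical) document.
cited_refs_per_doc = [
    ["Smith2018", "Lee2020", "Garcia2019"],
    ["Smith2018", "Garcia2019"],
    ["Lee2020", "Chen2021", "Smith2018"],
    ["Chen2021", "Garcia2019"],
]

G = nx.Graph()
for refs in cited_refs_per_doc:
    for a, b in itertools.combinations(sorted(set(refs)), 2):
        # Edge weight = number of documents co-citing the pair.
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

centrality = nx.betweenness_centrality(G)
for ref, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{ref}: betweenness centrality = {c:.3f}")
```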

16 pages, 12134 KB  
Article
Intelligent Dynamic Multi-Dimensional Heterogeneous Resource Scheduling Optimization Strategy Based on Kubernetes
by Jialin Cai, Hui Zeng, Feifei Liu and Junming Chen
Mathematics 2025, 13(8), 1342; https://doi.org/10.3390/math13081342 - 19 Apr 2025
Cited by 1 | Viewed by 1282
Abstract
In this paper, we tackle the challenge of optimizing resource utilization and demand-driven allocation in dynamic, multi-dimensional heterogeneous environments. Traditional containerized task scheduling systems, like Kubernetes, typically rely on default schedulers that primarily focus on CPU and memory, overlooking the multi-dimensional nature of heterogeneous resources such as GPUs, network I/O, and disk I/O. This results in suboptimal scheduling and underutilization of resources. To address this, we propose a dynamic scheduling method for heterogeneous resources using an enhanced Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) algorithm that adjusts weights in real time and applies nonlinear normalization. Leveraging parallel computing, approximation, incremental computation, local updates, and hardware acceleration, the method minimizes overhead and ensures efficiency. Experimental results showed that, under low-load conditions, our method reduced task response times by 31–36%, increased throughput by 20–50%, and boosted resource utilization by over 20% compared to both the default Kubernetes scheduler and the Kubernetes Container Scheduling Strategy (KCSS) algorithm. These improvements were tested across diverse workloads, utilizing CPU, memory, GPU, and I/O resources, in a large-scale cluster environment, demonstrating the method’s robustness. These enhancements optimize cluster performance and resource efficiency, offering valuable insights for task scheduling in containerized cloud platforms. Full article
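
The core of the scheduling decision is a TOPSIS ranking over multi-dimensional node resources. The sketch below shows plain TOPSIS with a log-based nonlinear normalization on invented node data; the paper's exact weighting scheme, normalization, and acceleration techniques are not reproduced here.

```python
"""Minimal TOPSIS sketch for ranking candidate nodes over multi-dimensional
resources; weights, normalization choice, and data are illustrative assumptions."""
import numpy as np

# Rows: candidate nodes; columns: free CPU, free memory (GB), free GPUs, net I/O headroom.
X = np.array([
    [4.0, 16.0, 1.0, 800.0],
    [8.0,  8.0, 0.0, 400.0],
    [2.0, 32.0, 2.0, 600.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])      # could be adjusted at runtime

# Nonlinear (log) transform followed by vector normalization, then weighting.
Xn = np.log1p(X)
Xn = Xn / np.linalg.norm(Xn, axis=0)
V = Xn * weights

ideal, anti = V.max(axis=0), V.min(axis=0)    # all criteria treated as "benefit" type
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

best = int(np.argmax(closeness))
print("closeness:", np.round(closeness, 3), "-> schedule on node", best)
```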

13 pages, 696 KB  
Article
Fuzzy Non-Payment Risk Management Rooted in Optimized Household Consumption Units
by Gregorio Izquierdo Llanes and Antonio Salcedo
Risks 2025, 13(4), 74; https://doi.org/10.3390/risks13040074 - 11 Apr 2025
Viewed by 674
Abstract
Traditionally, business risk management models have not taken into consideration household composition for the purposes of credit granting or project financing in order to manage the risk of default. In this research, an improvement in the risk management model was obtained by introducing household composition as a new exogenous variable. With the application of generalized reduced gradient nonlinear optimization modeling, improved consumption units are determined according to the different types of household size and the age of their members. Estimated household economies of scale show a consistent pattern even in the year 2020, corresponding with the COVID-19 outbreak. Thus, an adjusted estimation of the household equivalized disposable income is obtained. Based on this more accurate equivalized income estimation, acceptable debt levels can be determined. The estimation of probabilities of default allows the household risk of default to be managed. In this way, a novel model is proposed by incorporating household composition into credit risk evaluation using fuzzy clustering and optimization techniques. Companies can assess the expected loss of a credit exposure through a model that can help them in the process of making evidence-informed decisions. Full article
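
The model combines household-composition features with fuzzy clustering. As a small, self-contained illustration of the fuzzy-clustering piece only (not the paper's generalized reduced gradient optimization or its features), the sketch below implements fuzzy c-means in NumPy on invented household data.

```python
"""Hand-rolled fuzzy c-means sketch for grouping households by composition
and income; feature columns and cluster count are illustrative assumptions."""
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)          # n x c membership matrix
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))             # standard FCM membership update
        U = U / U.sum(axis=1, keepdims=True)
    return centers, U

# Columns: household size, number of children, equivalized disposable income (kEUR).
X = np.array([[1, 0, 22.0], [2, 0, 30.0], [4, 2, 28.0], [3, 1, 18.0], [5, 3, 20.0]])
centers, U = fuzzy_cmeans(X, c=2)
print("cluster centers:\n", np.round(centers, 2))
print("memberships:\n", np.round(U, 2))
```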

18 pages, 565 KB  
Article
Efficient Orchestration of Distributed Workloads in Multi-Region Kubernetes Cluster
by Radoslav Furnadzhiev, Mitko Shopov and Nikolay Kakanakov
Computers 2025, 14(4), 114; https://doi.org/10.3390/computers14040114 - 21 Mar 2025
Cited by 4 | Viewed by 1786
Abstract
Distributed Kubernetes clusters provide robust solutions for geo-redundancy and fault tolerance in modern cloud architectures. However, default scheduling mechanisms primarily optimize for resource availability, often neglecting network topology, inter-node latency, and global resource efficiency, leading to suboptimal task placement in multi-region deployments. This paper proposes network-aware scheduling plugins that integrate heuristic, metaheuristic, and linear programming methods to optimize resource utilization and inter-zone communication latency for containerized workloads, particularly Apache Spark batch-processing tasks. Unlike the default scheduler, the presented approach incorporates inter-node latency constraints and prioritizes locality-aware scheduling, ensuring efficient pod distribution while minimizing network overhead. The proposed plugins are evaluated using the kube-scheduler-simulator, a tool that replicates Kubernetes scheduling behavior without deploying real workloads. Experiments cover multiple cluster configurations, varying in node count, region count, and inter-region latencies, with performance metrics recorded for scheduler efficiency, inter-zone communication impact, and execution time across different optimization algorithms. The obtained results indicate that network-aware scheduling approaches significantly improve latency-aware placement decisions, achieving lower inter-region communication delays while maintaining resource efficiency. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))
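
The paper's scheduler plugins combine heuristic, metaheuristic, and linear programming methods; the sketch below shows only the simplest greedy flavor of latency-aware placement, on a hypothetical topology, to make the idea concrete. It is not the plugins' actual logic or the kube-scheduler API.

```python
"""Greedy sketch of latency-aware pod placement across regions: prefer the
region closest (by inter-region latency) to each pod's main communication peer,
subject to free capacity. Topology and numbers are invented."""
import numpy as np

# Inter-region round-trip latency in ms (symmetric) and per-region free pod slots.
latency = np.array([[0, 12, 45],
                    [12, 0, 38],
                    [45, 38, 0]])
capacity = [2, 3, 2]

# Pods to place, each annotated with the region of the peer it talks to most.
pods = [("spark-exec-1", 0), ("spark-exec-2", 0), ("spark-exec-3", 1),
        ("spark-exec-4", 2), ("spark-exec-5", 0)]

placement = {}
for pod, peer_region in pods:
    for region in np.argsort(latency[peer_region]):   # nearest regions first
        if capacity[region] > 0:
            capacity[region] -= 1
            placement[pod] = int(region)
            break

print(placement)
```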

12 pages, 5052 KB  
Protocol
Automated Measurement of Grid Cell Firing Characteristics
by Nate M. Sutton, Blanca E. Gutiérrez-Guzmán, Holger Dannenberg and Giorgio A. Ascoli
Algorithms 2025, 18(3), 139; https://doi.org/10.3390/a18030139 - 3 Mar 2025
Cited by 2 | Viewed by 1021
Abstract
We describe GridMet as open-source software that automatically measures the spatial tuning parameters of grid cells, such as firing field size, spacing, and orientation angles. Applying these metrics to experimental data can help quantify changes in the geometric characteristics of grid cell firing across experimental conditions. GridMet uses clustering and other advanced methods to detect and characterize fields, increasing accuracy compared to alternative methods such as those based on peak firing. Novel contributions of this work include an effective approach for automated field size estimation and an original method for estimating field spacing that can overcome challenges encountered in other software. The user-friendly yet flexible design of GridMet aims to facilitate widespread community adoption. Specifically, GridMet allows basic usage with default parameter settings while also enabling the expert configuration of many parameter values for more advanced applications. Free release of the MATLAB source code will encourage the development of custom variations or integration with other software packages. At the same time, we also provide a runtime version of GridMet, thus avoiding the requirement to purchase any separate licenses. We have optimized GridMet for batch scripting workflows to aid investigations of multi-trial data on multiple grid cells. Full article
(This article belongs to the Special Issue Advancements in Signal Processing and Machine Learning for Healthcare)
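
GridMet itself is MATLAB software; as a rough Python analogue of the clustering-style field detection it describes (threshold a rate map, label connected components, report field size and spacing), the sketch below uses scipy.ndimage on a synthetic rate map. The threshold and synthetic fields are assumptions, not GridMet's pipeline.

```python
"""Simplified firing-field detection sketch: threshold a rate map, label
connected components, and estimate field sizes and nearest-neighbor spacing."""
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
rate_map = rng.random((40, 40)) * 0.2                    # background noise
for cy, cx in [(10, 10), (10, 30), (30, 20)]:            # three synthetic fields
    y, x = np.ogrid[:40, :40]
    rate_map += 3.0 * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * 3.0 ** 2))

mask = rate_map > 0.3 * rate_map.max()                   # firing-field threshold
labels, n_fields = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_fields + 1))   # bins per field
centers = np.array(ndimage.center_of_mass(rate_map, labels,
                                          index=range(1, n_fields + 1)))

# Field spacing: mean distance from each field center to its nearest neighbor.
d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
np.fill_diagonal(d, np.inf)
print(f"fields={n_fields}, sizes(bins)={sizes}, "
      f"mean spacing={d.min(axis=1).mean():.1f} bins")
```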

17 pages, 3628 KB  
Article
Optimizing Kubernetes Scheduling for Web Applications Using Machine Learning
by Vedran Dakić, Goran Đambić, Jurica Slovinac and Jasmin Redžepagić
Electronics 2025, 14(5), 863; https://doi.org/10.3390/electronics14050863 - 21 Feb 2025
Cited by 1 | Viewed by 2982
Abstract
Machine learning (ML) has significantly enhanced computing and optimization, offering solutions to complex challenges. This paper investigates the development of a custom Kubernetes scheduler employing ML to optimize web application placement. A cluster of five nodes was established for evaluation, utilizing Python and TensorFlow to create and train a neural network that forecasts scheduling times for various configurations. The dataset, generated via scripts, encompassed multiple scenarios to ensure thorough model training. The results indicate that the custom scheduler with ML consistently surpasses the default Kubernetes scheduler in scheduling time by 1–18%, depending on the scenario. As expected, the difference between the built-in and ML-based scheduler becomes more evident with higher loads, underscoring opportunities for future research by using other ML algorithms and considering energy efficiency. Full article
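
The paper trains a TensorFlow neural network to forecast scheduling times for candidate configurations. The sketch below shows a toy version of that idea; the features, architecture, and synthetic data are assumptions for illustration, not the authors' model or dataset.

```python
"""Toy sketch: a small neural network regresses scheduling time from node/pod
features, then the node with the lowest predicted time is chosen."""
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Features: node CPU load, node memory load, pod CPU request, pod memory request.
X = rng.random((2000, 4)).astype("float32")
# Synthetic target: scheduling time grows with load and request size (plus noise).
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3]
     + rng.normal(0, 0.02, 2000)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Score each candidate node for an incoming pod; pick the fastest prediction.
candidates = np.array([[0.2, 0.3, 0.1, 0.1],
                       [0.8, 0.7, 0.1, 0.1]], dtype="float32")
pred = model.predict(candidates, verbose=0).ravel()
print("predicted scheduling times:", pred, "-> choose node", int(pred.argmin()))
```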

25 pages, 6259 KB  
Article
Integration of Multi-Source Landslide Disaster Data Based on Flink Framework and APSO Load Balancing Task Scheduling
by Zongmin Wang, Huangtaojun Liang, Haibo Yang, Mengyu Li and Yingchun Cai
ISPRS Int. J. Geo-Inf. 2025, 14(1), 12; https://doi.org/10.3390/ijgi14010012 - 31 Dec 2024
Cited by 4 | Viewed by 1197
Abstract
As monitoring technologies and data collection methodologies advance, landslide disaster data reflects attributes such as diverse sources, heterogeneity, substantial volumes, and stringent real-time requirements. To bolster the data support capabilities for the monitoring, prevention, and management of landslide disasters, the efficient integration of multi-source heterogeneous data is of paramount importance. The present study proposes an innovative approach to integrate multi-source landslide disaster data by combining the Flink-oriented framework with load balancing task scheduling based on an improved particle swarm optimization (APSO) algorithm. It utilizes Flink’s streaming processing capabilities to efficiently process and store multi-source landslide data. To tackle the issue of uneven cluster load distribution during the integration process, the APSO algorithm is proposed to facilitate cluster load balancing. The findings indicate the following: (1) The multi-source data integration method for landslide disaster based on Flink and APSO proposed in this article, combined with the structural characteristics of landslide disaster data, adopts different integration methods for data in different formats, which can effectively achieve the integration of multi-source landslide data. (2) A multi-source landslide data integration framework based on Flink has been established. Utilizing Kafka as a message queue, a real-time data pipeline was constructed, with Flink facilitating data processing and read/write operations for the database. This implementation achieves efficient integration of multi-source landslide data. (3) Compared to Flink’s default task scheduling strategy, the cluster load balancing strategy based on APSO demonstrated a reduction of approximately 4.7% in average task execution time and an improvement of approximately 5.4% in average system throughput during actual tests using landslide data sets. The research findings illustrate a significant improvement in the efficiency of data integration processing and system performance. Full article
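
The load-balancing component relies on an improved particle swarm optimization (APSO). The sketch below shows only the plain PSO baseline applied to the same kind of objective (minimizing load imbalance across nodes), with invented task costs; the paper's APSO improvements and Flink integration are not reproduced.

```python
"""Basic PSO sketch for balancing task assignments across cluster nodes;
particles are continuous positions decoded to node indices by truncation."""
import numpy as np

rng = np.random.default_rng(0)
task_cost = rng.uniform(1, 10, size=20)          # processing cost of 20 tasks
n_nodes, n_particles, iters = 4, 30, 200

def imbalance(assignment):
    """Standard deviation of per-node load for one integer assignment vector."""
    loads = np.array([task_cost[assignment == n].sum() for n in range(n_nodes)])
    return loads.std()

pos = rng.uniform(0, n_nodes, size=(n_particles, task_cost.size))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([imbalance(p.astype(int)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia and acceleration weights
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_nodes - 1e-9)
    vals = np.array([imbalance(p.astype(int)) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best load imbalance (std of node loads):", round(float(pbest_val.min()), 3))
```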

21 pages, 4248 KB  
Article
OOSP: Opportunistic Optimization Scheme for Pod Deployment Enhanced with Multilayered Sensing
by Joo-Young Roh, Sang-Hoon Choi and Ki-Woong Park
Sensors 2024, 24(19), 6244; https://doi.org/10.3390/s24196244 - 26 Sep 2024
Viewed by 1401
Abstract
In modern cloud environments, container orchestration tools are essential for effectively managing diverse workloads and services, and Kubernetes has become the de facto standard tool for automating the deployment, scaling, and operation of containerized applications. While Kubernetes plays an important role in optimizing and managing the deployment of diverse services and applications, its default scheduling approach, which is not optimized for all types of workloads, can often result in poor performance and wasted resources. This is particularly true in environments with complex interactions between services, such as microservice architectures. The traditional Kubernetes scheduler makes scheduling decisions based on CPU and memory usage, but the limitation of this arrangement is that it does not fully account for the performance and resource efficiency of the application. As a result, the communication latency between services increases, and the overall system performance suffers. Therefore, a more sophisticated and adaptive scheduling method is required. In this work, we propose an adaptive pod placement optimization technique using multi-tier inspection to address these issues. The proposed technique collects and analyzes multi-tier data to improve application performance and resource efficiency, which are overlooked by the default Kubernetes scheduler. It derives optimal placements based on the coupling and dependencies between pods, resulting in more efficient resource usage and better performance. To validate the performance of the proposed method, we configured a Kubernetes cluster in a virtualized environment and conducted experiments using a benchmark application with a microservice architecture. The experimental results show that the proposed method outperforms the existing Kubernetes scheduler, reducing the average response time by up to 11.5% and increasing the number of requests processed per second by up to 10.04%. This indicates that the proposed method minimizes the inter-pod communication delay and improves the system-wide resource utilization. This research aims to optimize application performance and increase resource efficiency in cloud-native environments, and the proposed technique can be applied to different cloud environments and workloads in the future to provide more generalized optimizations. This is expected to contribute to increasing the operational efficiency of cloud infrastructure and improving the quality of service. Full article
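
To make the "coupling and dependency aware" placement idea concrete, the sketch below scores candidate nodes by combining resource fit with measured inter-pod traffic. The weights, traffic numbers, and node data are hypothetical; this is not the OOSP scheduler's actual scoring function.

```python
"""Sketch of a placement score that mixes resource fit with pod coupling."""
# Candidate nodes: free CPU (cores) and memory (GB).
nodes = {"node-a": (4.0, 8.0), "node-b": (8.0, 16.0), "node-c": (2.0, 4.0)}
# Measured request rate (req/s) between the pod being placed and pods already
# running on each node (a proxy for coupling/dependency strength).
traffic_to_node = {"node-a": 120.0, "node-b": 15.0, "node-c": 300.0}
pod_request = (1.0, 2.0)                      # CPU cores, memory GB

alpha, beta = 0.5, 0.5                        # resource vs. coupling weight
scores = {}
for name, (cpu, mem) in nodes.items():
    if cpu < pod_request[0] or mem < pod_request[1]:
        continue                              # node cannot fit the pod at all
    resource_fit = min(cpu / pod_request[0], mem / pod_request[1])
    coupling = traffic_to_node[name] / max(traffic_to_node.values())
    scores[name] = alpha * resource_fit + beta * coupling

best = max(scores, key=scores.get)
print(scores, "-> place pod on", best)
```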

24 pages, 9191 KB  
Article
Clustering Method Comparison for Rural Occupant’s Behavior Based on Building Time-Series Energy Data
by Xiaodong Liu, Shuming Zhang, Xiaohan Wang, Rui Wu, Junqi Yang, Hong Zhang, Jianing Wu and Zhixin Li
Buildings 2024, 14(8), 2491; https://doi.org/10.3390/buildings14082491 - 12 Aug 2024
Cited by 1 | Viewed by 1959
Abstract
The purpose of this research is to compare clustering methods and select the optimal clustering approach for rural building energy consumption data. Research undertaken so far has mainly focused on solving specific issues that arise when employing clustering methods. This paper uses Yushan Island residents’ time-series electricity usage data as the database for analysis. Fourteen algorithms in five categories were used for cluster analysis of the basic data sets. The results show that Km_Euclidean and Km_shape present better clustering effects and fitting performance on continuous data than the other algorithms, with high accuracy rates of 67.05% and 65.09%, respectively. Km_DTW is applicable to intermittent curves rather than continuous data, with a low precision rate of 35.29% for line curves. The final conclusion indicates that the K-means algorithm with Euclidean distance calculation and the k-shape algorithm are the two best clustering algorithms for building time-series energy curves. The deep learning algorithm cannot cluster time-series building electricity usage data with high precision under default parameters. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
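
For readers who want to reproduce the three best-known set-ups the paper ranks (K-means with Euclidean distance, K-means with DTW, and k-shape), the sketch below runs them with tslearn on synthetic daily load curves. The library choice and the data are assumptions; the paper's fourteen algorithms and its accuracy protocol are not reproduced.

```python
"""Compare Euclidean K-means, DTW K-means, and k-shape on synthetic load curves."""
import numpy as np
from tslearn.clustering import TimeSeriesKMeans, KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

rng = np.random.default_rng(0)
hours = np.arange(24)
# Two synthetic household profiles: evening-peak vs. daytime-peak usage.
evening = 1.0 + np.exp(-((hours - 20) ** 2) / 8.0)
daytime = 1.0 + np.exp(-((hours - 13) ** 2) / 8.0)
X = np.vstack([evening + rng.normal(0, 0.1, (30, 24)),
               daytime + rng.normal(0, 0.1, (30, 24))])
X = TimeSeriesScalerMeanVariance().fit_transform(X)   # z-normalize each curve

for name, model in [
    ("Km_Euclidean", TimeSeriesKMeans(n_clusters=2, metric="euclidean", random_state=0)),
    ("Km_DTW", TimeSeriesKMeans(n_clusters=2, metric="dtw", random_state=0)),
    ("Km_shape", KShape(n_clusters=2, random_state=0)),
]:
    labels = model.fit_predict(X)
    print(name, "cluster sizes:", np.bincount(labels))
```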

26 pages, 1012 KB  
Article
On the Optimization of Kubernetes toward the Enhancement of Cloud Computing
by Subrota Kumar Mondal, Zhen Zheng and Yuning Cheng
Mathematics 2024, 12(16), 2476; https://doi.org/10.3390/math12162476 - 10 Aug 2024
Cited by 5 | Viewed by 4883
Abstract
With the vigorous development of big data and cloud computing, containers are becoming the main platform for running applications due to their flexible and lightweight features. A container cluster management system can more effectively manage large numbers of containers across multiple machine nodes, and Kubernetes has become a leader in container cluster management systems with its powerful container orchestration capabilities. However, the current default Kubernetes components and settings have shown performance bottlenecks and are not well adapted to complex usage environments. In particular, the issues include data distribution latency, inefficient cluster backup and restore leading to poor disaster recovery, poor rolling updates leading to downtime, inefficiency in load balancing and request handling, and poor autoscaling and scheduling strategies leading to quality of service (QoS) violations and insufficient resource usage, among many others. To address the insufficient performance of the default Kubernetes platform, this paper focuses on reducing data distribution latency, improving cluster backup and restore strategies for better disaster recovery, optimizing zero-downtime rolling updates, incorporating better strategies for load balancing and request handling, optimizing autoscaling, and introducing a better scheduling strategy. The relevant experimental analysis is also carried out. The experimental results show that, compared with the default settings, the optimized Kubernetes platform can handle more than 2000 concurrent requests, reduce CPU overhead by more than 1.5%, reduce memory usage by more than 0.6%, reduce the average request time by 7.6% on average, and reduce the number of request failures by at least 32.4%, achieving the expected effect. Full article
(This article belongs to the Special Issue Advanced Computational Intelligence in Cloud/Edge Computing)

22 pages, 5336 KB  
Article
Disrupted Brain Network Measures in Parkinson’s Disease Patients with Severe Hyposmia and Cognitively Normal Ability
by Karthik Siva, Palanisamy Ponnusamy and Malmathanraj Ramanathan
Brain Sci. 2024, 14(7), 685; https://doi.org/10.3390/brainsci14070685 - 8 Jul 2024
Cited by 4 | Viewed by 2326
Abstract
Neuroscience research on Parkinson’s Disease (PD) has revolved around brain structural changes, functional activity, and connectivity alterations; however, how the network topological organization becomes altered is still unclear, specifically in Parkinson’s patients with severe hyposmia. In this study, we have examined the functional network topological alteration in patients affected by Parkinson’s Disease with normal cognitive ability (ODN), Parkinson’s Disease with severe hyposmia (ODP), and healthy controls (HCs) using resting-state functional magnetic resonance imaging (rsfMRI) data. We have analyzed brain topological organization using popular graph measures such as network segregation (clustering coefficient, modularity), network integration (participation coefficient, path length), small-worldness, efficiency, centrality, and assortativity. Then, we used a feature ranking approach based on the diagonal adaptation of neighborhood component analysis, aiming to determine a graph measure that is sensitive enough to distinguish between these three different groups. We noted significantly lower segregation, local efficiency, and small-worldness in ODP compared to ODN and HCs. On the contrary, we did not find differences in network integration in ODP compared to ODN and HCs, which indicates that the brain network becomes fragmented in ODP. At the brain network level, a progressive increase in the DMN (Default Mode Network) was observed from healthy controls to ODN to ODP, and a continuous decrease in the cingulo-opercular network was observed from healthy controls to ODN to ODP. Further, the feature ranking approach has shown that the whole-brain clustering coefficient and small-worldness are sensitive measures to classify ODP vs. ODN, as well as HCs. Looking at brain regional network segregation, we found that the cerebellum and the limbic, fronto-parietal, and occipital lobes show greater reductions in ODP than in ODN and HCs. Our results suggest that network topological measures, specifically whole-brain segregation and small-worldness, decrease in ODP. At the network level, an increase in the DMN and a decrease in the cingulo-opercular network could be used as biomarkers to characterize ODN and ODP. Full article
(This article belongs to the Special Issue New Approaches in the Exploration of Parkinson’s Disease)
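
The whole-brain measures named above (clustering coefficient, path length, small-worldness) can be computed with networkx; the sketch below does so on a toy thresholded connectivity matrix. The random matrix, threshold, and the degree-matched random reference used for small-worldness are illustrative assumptions, not the paper's pipeline.

```python
"""Toy graph-measure sketch: segregation, integration, and small-worldness
on a thresholded synthetic connectivity matrix."""
import numpy as np
import networkx as nx

def avg_path_length(G):
    """Average shortest path length on the largest connected component."""
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.average_shortest_path_length(G)

rng = np.random.default_rng(0)
n = 30
corr = np.abs(rng.normal(0, 0.3, (n, n)))
corr = (corr + corr.T) / 2                       # symmetric "connectivity" matrix
np.fill_diagonal(corr, 0)
G = nx.from_numpy_array((corr > 0.3).astype(int))  # simple absolute threshold

C = nx.average_clustering(G)                     # network segregation
L = avg_path_length(G)                           # network integration

# Small-worldness sigma relative to a degree-matched random reference graph.
R = nx.expected_degree_graph([d for _, d in G.degree()], selfloops=False)
C_rand = nx.average_clustering(R) or 1e-9
L_rand = avg_path_length(R)
sigma = (C / C_rand) / (L / L_rand)
print(f"clustering={C:.3f}, path length={L:.3f}, small-worldness sigma={sigma:.2f}")
```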

17 pages, 1824 KB  
Systematic Review
The Complex Role Played by the Default Mode Network during Sexual Stimulation: A Cluster-Based fMRI Meta-Analysis
by Joana Pinto, Camila Comprido, Vanessa Moreira, Marica Tina Maccarone, Carlotta Cogoni, Ricardo Faustino, Duarte Pignatelli and Nicoletta Cera
Behav. Sci. 2024, 14(7), 570; https://doi.org/10.3390/bs14070570 - 5 Jul 2024
Cited by 1 | Viewed by 6062
Abstract
The default mode network (DMN) is a complex network that plays a significant and active role during naturalistic stimulation. Previous studies that have used naturalistic stimuli, such as real-life stories or silent or sonorous films, have found that the information processing involved a complex hierarchical set of brain regions, including the DMN nodes. The DMN is not involved in low-level features and is only associated with high-level content-related incoming information. The human sexual experience involves a complex set of processes related to both external context and inner processes. Since the DMN plays an active role in the integration of naturalistic stimuli and aesthetic perception with beliefs, thoughts, and episodic autobiographical memories, we aimed at quantifying the involvement of the nodes of the DMN during visual sexual stimulation. After a systematic search in the principal electronic databases, we selected 83 fMRI studies, and an ALE meta-analysis was calculated. We performed conjunction analyses to assess differences in the DMN related to stimulus modalities, sex differences, and sexual orientation. The results show that sexual stimulation alters the topography of the DMN and highlights the DMN’s active role in the integration of sexual stimuli with sexual schemas and beliefs. Full article
(This article belongs to the Special Issue Neural Correlates of Cognitive and Affective Processing)

12 pages, 433 KB  
Article
Uncertainty in GNN Learning Evaluations: A Comparison between Measures for Quantifying Randomness in GNN Community Detection
by William Leeney and Ryan McConville
Entropy 2024, 26(1), 78; https://doi.org/10.3390/e26010078 - 17 Jan 2024
Cited by 2 | Viewed by 2588
Abstract
(1) The enhanced capability of graph neural networks (GNNs) in unsupervised community detection of clustered nodes is attributed to their capacity to encode both the connectivity and feature information spaces of graphs. The identification of latent communities holds practical significance in various domains, from social networks to genomics. Current real-world performance benchmarks are perplexing due to the multitude of decisions influencing GNN evaluations for this task. (2) Three metrics are compared to assess the consistency of algorithm rankings in the presence of randomness. The consistency and quality of performance are evaluated between results obtained under hyperparameter optimisation and results obtained with the default hyperparameters. (3) The results compare hyperparameter optimisation with default hyperparameters, revealing a significant performance loss when hyperparameter investigation is neglected. A comparison of metrics indicates that ties in ranks can substantially alter the quantification of randomness. (4) Ensuring adherence to the same evaluation criteria may result in notable differences in the reported performance of methods for this task. The W randomness coefficient, based on the Wasserstein distance, is identified as providing the most robust assessment of randomness. Full article
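
To make the "consistency of rankings under randomness" idea concrete, the sketch below computes Kendall's coefficient of concordance across seeds and a per-algorithm Wasserstein distance to a uniformly random rank baseline. This is a simplified stand-in for the paper's W randomness coefficient, and the rank data are invented.

```python
"""Two simple ways to quantify how much random seeds shuffle an algorithm ranking."""
import numpy as np
from scipy.stats import wasserstein_distance

# ranks[seed, algorithm]: rank obtained by each of 4 algorithms under 3 seeds.
ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 3, 4],
                  [1, 3, 2, 4]], dtype=float)
m, n = ranks.shape                          # seeds (judges), algorithms (items)

# Kendall's coefficient of concordance: 1 = identical rankings, 0 = no agreement.
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
kendall_w = 12 * S / (m ** 2 * (n ** 3 - n))

# Per-algorithm distance between its observed ranks and a uniform random baseline;
# larger values suggest the algorithm's rank is more stable (less random).
uniform = np.arange(1, n + 1, dtype=float)
wass = [wasserstein_distance(ranks[:, j], uniform) for j in range(n)]

print(f"Kendall's W = {kendall_w:.3f}")
print("Wasserstein distance from random baseline per algorithm:", np.round(wass, 3))
```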
