Search Results (12,502)

Search Parameters:
Keywords = informed decision-making

24 pages, 10629 KB  
Article
Evaluating BIM and Mesh-Based 3D Modeling Approaches for Architectural Heritage: The Dosoftei House in Iași City, Romania
by Iosif Lavric, Valeria-Ersilia Oniga, Ana-Maria Loghin, Gabriela Covatariu and George-Cătălin Maleș
Appl. Sci. 2025, 15(17), 9409; https://doi.org/10.3390/app15179409 - 27 Aug 2025
Abstract
Given its considerable cultural, historical, and economic value, built heritage requires the application of modern techniques for effective documentation and conservation. While multiple sensors are available for 3D modeling, laser scanning remains the most commonly employed due to its efficiency, precision, and ability to comprehensively capture the building’s geometry, surface textures, and structural details. This results in highly detailed 3D representations that are essential for accurate documentation, analysis, and conservation planning. This study investigates the complementary potential of different 3D modeling approaches for the digital representation of the Dosoftei House in Iași, a monument of historical significance. For this purpose, an integrated point cloud was created based on a hand-held mobile laser scanner (HMLS), i.e., the FJD Trion P1, and a terrestrial laser scanner (TLS), i.e., the Maptek I-Site 8820 long-range laser scanner, the latter specifically used to capture the roof structures. Based on this dataset, a parametric model was created in Revit, supported by panoramic images, allowing for a structured representation useful in technical documentation and heritage management. In parallel, a mesh model was generated in CloudCompare using Poisson surface reconstruction. The comparison of the two methods highlights the high geometric accuracy of the mesh model and the Building Information Modeling (BIM) model’s capability to efficiently manage information linked to architectural elements. While the mesh provides detailed geometry, the BIM model excels in information organization and supports informed decision-making in conservation efforts. This research proposes leveraging the advantages of both methods within an integrated workflow, applicable on a larger scale in architectural heritage conservation projects. Full article
31 pages, 19249 KB  
Article
Temperature-Compensated Multi-Objective Framework for Core Loss Prediction and Optimization: Integrating Data-Driven Modeling and Evolutionary Strategies
by Yong Zeng, Da Gong, Yutong Zu and Qiong Zhang
Mathematics 2025, 13(17), 2758; https://doi.org/10.3390/math13172758 - 27 Aug 2025
Abstract
Magnetic components serve as critical energy conversion elements in power conversion systems, with their performance directly determining overall system efficiency and long-term operational reliability. The development of accurate core loss frameworks and multi-objective optimization strategies has emerged as a pivotal technical bottleneck in power electronics research. This study develops an integrated framework combining physics-informed modeling and multi-objective optimization. Key findings include the following: (1) a square-root temperature correction model (exponent = 0.5) derived via nonlinear least squares outperforms six alternatives for Steinmetz equation enhancement; (2) a hybrid Bi-LSTM-Bayes-ISE model achieves industry-leading predictive accuracy (R2 = 96.22%) through Bayesian hyperparameter optimization; and (3) coupled with NSGA-II, the framework optimizes core loss minimization and magnetic energy transmission, yielding Pareto-optimal solutions. Eight decision-making strategies are compared to refine trade-offs, while a crow search algorithm (CSA) improves NSGA-II’s initial population diversity. UFM, as the optimal decision strategy, achieves minimal core loss (659,555 W/m3) and maximal energy transmission (41,201.9 T·Hz) under 90 °C, 489.7 kHz, and 0.0841 T conditions. Experimental results validate the approach’s superiority in balancing performance and multi-objective efficiency under thermal variations. Full article
(This article belongs to the Special Issue Multi-Objective Optimization and Applications)
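The square-root temperature correction this abstract describes can be sketched as a modified Steinmetz equation. The coefficient values below (`k`, `alpha`, `beta`, `t_ref_c`) are illustrative placeholders, not the paper's fitted parameters:

```python
def core_loss(f_hz, b_peak_t, temp_c, k=1.5, alpha=1.4, beta=2.4, t_ref_c=25.0):
    """Steinmetz core-loss density (W/m^3) with a square-root temperature
    correction, mirroring the exponent-0.5 model described above.
    k, alpha, beta, and t_ref_c are assumed values for illustration."""
    # Classical Steinmetz term: P = k * f^alpha * B^beta
    p_base = k * (f_hz ** alpha) * (b_peak_t ** beta)
    # Square-root correction: scale by (T / T_ref)^0.5, temperatures in kelvin
    t_k = temp_c + 273.15
    t_ref_k = t_ref_c + 273.15
    return p_base * (t_k / t_ref_k) ** 0.5
```

Evaluated at the paper's quoted operating point, e.g. `core_loss(489.7e3, 0.0841, 90.0)`, this gives a loss density whose value depends entirely on the assumed coefficients.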

21 pages, 567 KB  
Article
Circular Pythagorean Fuzzy Deck of Cards Model for Optimal Deep Learning Architecture in Media Sentiment Interpretation
by Jiaqi Zheng, Song Wang and Zhaoqiang Wang
Symmetry 2025, 17(9), 1399; https://doi.org/10.3390/sym17091399 - 27 Aug 2025
Abstract
The rise of streaming services and online story-sharing has led to a vast amount of cinema and television content being viewed and reviewed daily by a worldwide audience. It is a unique challenge to grasp the nuanced insights of these reviews, particularly as context, emotion, and specific components like acting, direction, and storyline intertwine extensively. The aim of this study is to address this complexity with a new hybrid Multi-Criteria Decision-Making (MCDM) model that combines the Deck of Cards Method (DoCM) with the Circular Pythagorean Fuzzy Set (CPFS) framework, retaining the symmetry of information. The study is conducted on a simulated dataset to demonstrate the framework and outline the plan for approaching real-world press reviews. We postulate a more informed mechanism of assessing and choosing the most appropriate deep learning architecture, such as transformer variants, the hybrid Convolutional Neural Network-Recurrent Neural Network (CNN-RNN), and attention-based frameworks for aspect-based sentiment mapping in film and television reviews. The model leverages both the cognitive ease of the DoCM and the expressive ability of the Pythagorean fuzzy set (PFS) in a circular relationship setting possessing symmetry, and can be applied to various decision-making situations other than the interpretation of media sentiments. This enables decision-makers to intuitively and flexibly compare alternatives based on many sentiment-relevant aspects, including classification accuracy, interpretability, computational efficiency, and generalization. The experiments are based on a hypothetical representation of media review datasets and test whether the model can combine human insight with algorithmic precision. Ultimately, this study presents a sound, structurally clear, and expandable framework of decision support to academics and industry professionals involved in converging deep learning and opinion mining in entertainment analytics. Full article
(This article belongs to the Section Mathematics)
22 pages, 3467 KB  
Article
AlzheimerRAG: Multimodal Retrieval-Augmented Generation for Clinical Use Cases
by Aritra Kumar Lahiri and Qinmin Vivian Hu
Mach. Learn. Knowl. Extr. 2025, 7(3), 89; https://doi.org/10.3390/make7030089 - 27 Aug 2025
Abstract
Recent advancements in generative AI have fostered the development of highly adept Large Language Models (LLMs) that integrate diverse data types to empower decision-making. Among these, multimodal retrieval-augmented generation (RAG) applications are promising because they combine the strengths of information retrieval and generative models, enhancing their utility across various domains, including clinical use cases. This paper introduces AlzheimerRAG, a multimodal RAG application for clinical use cases, primarily focusing on Alzheimer’s disease case studies from PubMed articles. This application incorporates cross-modal attention fusion techniques to integrate textual and visual data processing by efficiently indexing and accessing vast amounts of biomedical literature. Our experimental results, compared to benchmarks such as BioASQ and PubMedQA, yield improved performance in the retrieval and synthesis of domain-specific information. We also present a case study using our multimodal RAG in various Alzheimer’s clinical scenarios. We infer that AlzheimerRAG can generate responses with accuracy non-inferior to humans and with low rates of hallucination. Full article
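The retrieval half of a RAG pipeline like the one this abstract describes can be reduced to a toy sketch: rank candidate documents by similarity to a query, then hand the top hits to a generator. This bag-of-words cosine ranking is a stand-in for the paper's actual cross-modal indexing, and the example corpus is invented:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k documents most similar to the query (score > 0 only)."""
    qv = Counter(query.lower().split())
    scored = [(cosine(qv, Counter(d.lower().split())), d) for d in corpus]
    return [d for s, d in sorted(scored, reverse=True)[:k] if s > 0]
```

In a full RAG loop, the retrieved passages would be concatenated into the LLM prompt; that generation step is omitted here.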
15 pages, 877 KB  
Review
A Call for Conceptual Clarity: “Emotion” as an Umbrella Term Did Not Work—Let’s Narrow It Down
by Peter Walla, Angelika Wolman and Georg Northoff
Brain Sci. 2025, 15(9), 929; https://doi.org/10.3390/brainsci15090929 - 27 Aug 2025
Abstract
To cut a long story short, the term “emotion” is predominantly employed as a comprehensive designation, encompassing phenomena such as feelings, affective processing, experiences, expressions, and, on occasion, cognitive processes. This has given rise to a plethora of schools of thought that diverge in their inclusion of these phenomena, not to mention the discordance regarding what emotions belong to the so-called set of discrete emotions in the first place. This is a problem, because clear and operational definitions are paramount for ensuring the comparability of research findings across studies and also across different disciplines. In response to this disagreement, it is here proposed to simplify the definition of the term “emotion”, instead of using it as an umbrella term overarching an unclear set of multiple phenomena, which is exactly what left all of us uncertain about the question of what an emotion actually is. From an etymological perspective, the simplest suggestion is to understand an emotion as behavior (from the Latin verb ‘emovere’, meaning to move out, and thus the noun ‘emotion’ meaning out-movement). This suggests that an emotion should not be understood as something felt, nor as a physiological reaction, or anything including cognition. Instead, emotions should be understood as behavioral outputs (not as information processing), with their connection to feelings being that they convey them. Consider fear, which should not be classified as an emotion; it should be understood as a feeling (fear is felt). The specific body posture, facial expression, and other behavioral manifestations resulting from muscle contractions should be classified as emotions, with their purpose being to communicate the felt fear to conspecifics. The underlying causative basis for all that exists is affective processing (i.e., neural activity), and it provides evaluative information to support decision-making. The essence of this model is that if affective processing responds above a certain threshold, chemicals are released, which leads to a feeling (e.g., felt fear) if the respective organism is capable of conscious experience. Finally, the communication of these feelings to conspecifics happens through emotion-behavior (i.e., emotions). In summary, affective processing guides behavior, and emotions communicate feelings. This perspective significantly simplifies the concept of an emotion and will prevent interchangeable use of emotion-related terms. Last but not least, according to the current model, emotions can also be produced voluntarily in order to feign a certain feeling, which is performed in various social settings. Applications of this model to various fields, including clinical psychology, show how beneficial it is. Full article
(This article belongs to the Special Issue Defining Emotion: A Collection of Current Models)

27 pages, 1684 KB  
Systematic Review
Exploring the Impact of Information and Communication Technology on Educational Administration: A Systematic Scoping Review
by Ting Liu, Yiming Taclis Luo, Patrick Cheong-Iao Pang and Ho Yin Kan
Educ. Sci. 2025, 15(9), 1114; https://doi.org/10.3390/educsci15091114 - 27 Aug 2025
Abstract
In the era of educational digital transformation, integrating information and communication technology (ICT) into school administration aligns with the goals of promoting personalized learning, equity, and teaching quality. This study examines how ICT reshapes management practices, addresses challenges, and achieves educational objectives. To explore ICT’s impact on school administration (2009–2024), we conducted a systematic scoping review of four databases (Web of Science, Scopus, ScienceDirect, and IEEE Xplore) following the PRISMA-ScR guidelines. Retrieved studies were screened, analyzed, and synthesized to identify key trends and challenges. The results show that ICT significantly improves administrative efficiency. Automated systems streamline routine tasks, allowing administrators to allocate more time to strategic planning. It enables data-driven decision-making. By analyzing large datasets, ICT helps identify trends in student performance and resource utilization, facilitating accurate forecasting and better resource allocation. Moreover, ICT strengthens stakeholder communication. Online platforms enable instant interaction among teachers, students, and parents, increasing the transparency and responsiveness of school administration. However, there are challenges. Data privacy concerns can erode trust, as student and staff data collection and use may lead to breaches. Infrastructure deficiencies, such as unreliable internet and outdated equipment, impede implementation. The digital divide exacerbates inequality, with under-resourced schools struggling to utilize ICT fully. ICT is vital in educational administration. Its integration requires a strategic approach. This study offers insights for optimizing educational management via ICT and highlights the need for equitable technological advancement to create an inclusive, high-quality educational system. Full article
(This article belongs to the Special Issue ICTs in Managing Education Environments)

35 pages, 1263 KB  
Review
Blockchain for Security in Digital Twins
by Rahanatu Suleiman, Akshita Maradapu Vera Venkata Sai, Wei Yu and Chenyu Wang
Future Internet 2025, 17(9), 385; https://doi.org/10.3390/fi17090385 - 27 Aug 2025
Abstract
Digital Twins (DTs) have become essential tools for improving efficiency, security, and decision-making across various industries. DTs enable deeper insight and more informed decision-making through the creation of virtual replicas of physical entities. However, they face privacy and security risks due to their real-time connectivity, making them vulnerable to cyber attacks. These attacks can lead to data breaches, disrupt operations, and cause communication delays, undermining system reliability. To address these risks, integrating advanced security frameworks such as blockchain technology offers a promising solution. Blockchains’ decentralized, tamper-resistant architecture enhances data integrity, transparency, and trust in DT environments. This paper examines security vulnerabilities associated with DTs and explores blockchain-based solutions to mitigate these challenges. A case study is presented involving how blockchain-based DTs can facilitate secure, decentralized data sharing between autonomous connected vehicles and traffic infrastructure. This integration supports real-time vehicle tracking, collision avoidance, and optimized traffic flow through secure data exchange between the DTs of vehicles and traffic lights. The study also reviews performance metrics for evaluating blockchain and DT systems and outlines future research directions. By highlighting the collaboration between blockchain and DTs, the paper proposes a pathway towards building more resilient, secure, and intelligent digital ecosystems for critical applications. Full article
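The tamper-evidence property that blockchain brings to DT data sharing can be sketched with a minimal hash chain: each record is hashed together with its predecessor's hash, so any later modification breaks verification. The record fields (e.g. `speed`) are hypothetical; a real system would add consensus, signatures, and distribution:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Create one block: payload plus a SHA-256 hash linking it to its predecessor."""
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def build_chain(records):
    """Chain a sequence of DT sensor records, starting from a genesis hash."""
    chain, prev = [], "0" * 64
    for r in records:
        block = make_block(r, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify_chain(chain):
    """Recompute every hash; returns False if any record or link was altered."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or make_block(b["record"], prev)["hash"] != b["hash"]:
            return False
        prev = b["hash"]
    return True
```

Altering any stored record (say, a vehicle's reported speed) changes its recomputed hash and invalidates every subsequent link, which is the integrity guarantee the abstract refers to.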

38 pages, 4944 KB  
Article
Integrated Survey Classification and Trend Analysis via LLMs: An Ensemble Approach for Robust Literature Synthesis
by Eleonora Bernasconi, Domenico Redavid and Stefano Ferilli
Electronics 2025, 14(17), 3404; https://doi.org/10.3390/electronics14173404 - 27 Aug 2025
Abstract
This study proposes a novel, scalable framework for the automated classification and synthesis of survey literature by integrating state-of-the-art Large Language Models (LLMs) with robust ensemble voting techniques. The framework consolidates predictions from three independent models—GPT-4, LLaMA 3.3, and Claude 3—to generate consensus-based classifications, thereby enhancing reliability and mitigating individual model biases. We demonstrate the generalizability of our approach through comprehensive evaluation on two distinct domains: Question Answering (QA) systems and Computer Vision (CV) survey literature, using a dataset of 1154 real papers extracted from arXiv. Comprehensive visual evaluation tools, including distribution charts, heatmaps, confusion matrices, and statistical validation metrics, are employed to rigorously assess model performance and inter-model agreement. The framework incorporates advanced statistical measures, including k-fold cross-validation, Fleiss’ kappa for inter-rater reliability, and chi-square tests for independence to validate classification robustness. Extensive experimental evaluations demonstrate that this ensemble approach achieves superior performance compared to individual models, with accuracy improvements of 10.0% over the best single model on QA literature and 10.9% on CV literature. Furthermore, comprehensive cost–benefit analysis reveals that our automated approach reduces manual literature synthesis time by 95% while maintaining high classification accuracy (F1-score: 0.89 for QA, 0.87 for CV), making it a practical solution for large-scale literature analysis. The methodology effectively uncovers emerging research trends and persistent challenges across domains, providing researchers with powerful tools for continuous literature monitoring and informed decision-making in rapidly evolving scientific fields. Full article
(This article belongs to the Special Issue Knowledge Engineering and Data Mining, 3rd Edition)
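The consensus voting and inter-rater agreement this abstract describes can be sketched in a few lines: majority vote across the three models' labels, plus Fleiss' kappa as the agreement statistic. The label values are invented; the paper's actual pipeline and prompts are not reproduced here:

```python
from collections import Counter

def majority_vote(labels_per_model):
    """Consensus label per paper; labels_per_model is a list of
    equal-length label lists, one per model (e.g. GPT-4, LLaMA, Claude)."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*labels_per_model)]

def fleiss_kappa(labels_per_model):
    """Fleiss' kappa for inter-model agreement over categorical labels."""
    n = len(labels_per_model)             # number of raters (models)
    items = list(zip(*labels_per_model))  # one tuple of votes per paper
    totals = Counter()
    p_i = []
    for votes in items:
        c = Counter(votes)
        totals.update(c)
        # Per-item agreement: P_i = (sum_j n_ij^2 - n) / (n (n - 1))
        p_i.append((sum(v * v for v in c.values()) - n) / (n * (n - 1)))
    p_bar = sum(p_i) / len(items)
    # Chance agreement from overall category proportions: P_e = sum_j p_j^2
    p_e = sum((count / (len(items) * n)) ** 2 for count in totals.values())
    return (p_bar - p_e) / (1 - p_e)
```

Perfect three-way agreement yields kappa = 1; independent, chance-level labeling drives it toward 0.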

24 pages, 2974 KB  
Article
Ecological Resilience and Sustainable Development: Dynamic Assessment and Evolution Mechanisms of Landscape Patterns and Ecotourism Suitability in the Yangtze River Delta Region
by Junjie Li, Xiaodong Liu, Zhiyu Feng, Jinjin Liu, Yibo Wang, Mengjie Zhang and Xiangbin Peng
Sustainability 2025, 17(17), 7706; https://doi.org/10.3390/su17177706 - 27 Aug 2025
Abstract
Ecotourism, as a resilient and sustainable form of tourism, plays an increasingly vital role in regional economic growth and ecological conservation, particularly in the face of challenges such as climate change and rapid urbanization. This study employs spatial-temporal analysis tools including GIS, Fragstats, and GeoDa to examine the dynamic evolution of ecotourism suitability levels (ESL) and landscape patterns (LP) in the Yangtze River Delta (YRD) from 2002 to 2022. By incorporating spatial autocorrelation analysis, the relationship between ESL and LP is investigated to assess the adaptive capacity of the regional ecotourism system. The results reveal the following: (1) Overall Trends: ESL in the YRD has generally increased over the past two decades, with expansions observed in both high and very low suitability areas, while areas of low suitability have contracted. (2) Spatial Patterns: Core cities such as Shanghai, Hangzhou, Nanjing, and Hefei exhibit high ESL; however, these areas also face intensified landscape fragmentation and decreased ecological connectivity. (3) Landscape Patterns: The region has experienced increasing landscape fragmentation and diversity, particularly in economically advanced zones, posing significant challenges to ecological resilience. (4) Spatial Clustering: Notable spatial clustering of ESL and LP indices is identified in highly urbanized areas, underscoring the necessity for adaptive landscape planning and flexible policy frameworks. This study provides empirical evidence and strategic recommendations to enhance the resilience and sustainability of ecotourism in rapidly urbanizing regions, supporting adaptive responses to crises and informed long-term decision-making. Full article

16 pages, 306 KB  
Article
Adaptive Cross-Scale Graph Fusion with Spatio-Temporal Attention for Traffic Prediction
by Zihao Zhao, Xingzheng Zhu and Ziyun Ye
Electronics 2025, 14(17), 3399; https://doi.org/10.3390/electronics14173399 - 26 Aug 2025
Abstract
Traffic flow prediction is a critical component of intelligent transportation systems, playing a vital role in alleviating congestion, improving road resource utilization, and supporting traffic management decisions. Although deep learning methods have made remarkable progress in this field in recent years, current studies still face challenges in modeling complex spatio-temporal dependencies, adapting to anomalous events, and generalizing to large-scale real-world scenarios. To address these issues, this paper proposes a novel traffic flow prediction model. The proposed approach simultaneously leverages temporal and frequency domain information and introduces adaptive graph convolutional layers to replace traditional graph convolutions, enabling dynamic capture of traffic network structural features. Furthermore, we design a frequency–temporal multi-head attention mechanism for effective multi-scale spatio-temporal feature extraction and develop a cross-multi-scale graph fusion strategy to enhance predictive performance. Extensive experiments on real-world datasets, PeMS and Beijing, demonstrate that our method significantly outperforms state-of-the-art (SOTA) baselines. For example, on the PeMS20 dataset, our model achieves a 53.6% lower MAE, a 12.3% lower NRMSE, and a 3.2% lower MAPE than the best existing method (STFGNN). Moreover, the proposed model achieves competitive computational efficiency and inference speed, making it well-suited for practical deployment. Full article
(This article belongs to the Special Issue Graph-Based Learning Methods in Intelligent Transportation Systems)
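The MAE, NRMSE, and MAPE figures quoted in this abstract correspond to standard regression error metrics. Definitions vary slightly across papers; in particular, the range normalization used for NRMSE below is one common convention and an assumption here:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def nrmse(y_true, y_pred):
    """Root-mean-square error normalized by the range of the true values
    (one common convention; the paper's exact normalization may differ)."""
    rmse = (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5
    return rmse / (max(y_true) - min(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error; y_true values must be nonzero."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```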

24 pages, 1651 KB  
Article
Attentive Neural Processes for Few-Shot Learning Anomaly-Based Vessel Localization Using Magnetic Sensor Data
by Luis Fernando Fernández-Salvador, Borja Vilallonga Tejela, Alejandro Almodóvar, Juan Parras and Santiago Zazo
J. Mar. Sci. Eng. 2025, 13(9), 1627; https://doi.org/10.3390/jmse13091627 - 26 Aug 2025
Abstract
Underwater vessel localization using passive magnetic anomaly sensing is a challenging problem due to the variability in vessel magnetic signatures and operational conditions. Data-based approaches may fail to generalize even to slightly different conditions. Thus, we propose an Attentive Neural Process (ANP) approach, in order to take advantage of its few-shot capabilities to generalize, for robust localization of underwater vessels based on magnetic anomaly measurements. Our ANP models the mapping from multi-sensor magnetic readings to position as a stochastic function: it cross-attends to a variable-size set of context points and fuses these with a global latent code that captures trajectory-level factors. The decoder outputs a Gaussian over coordinates, providing both point estimates and well-calibrated predictive variance. We validate our approach using a comprehensive dataset of magnetic disturbance fields, covering 64 distinct vessel configurations (combinations of varying hull sizes, submersion depths (water-column height over a seabed array), and total numbers of available sensors). Six magnetometer sensors in a fixed circular arrangement record the magnetic field perturbations as a vessel traverses sinusoidal trajectories. We compare the ANP against baseline multilayer perceptron (MLP) models: (1) base MLPs trained separately on each vessel configuration, and (2) a domain-randomized search (DRS) MLP trained on the aggregate of all configurations to evaluate generalization across domains. The results demonstrate that the ANP achieves superior generalization to new vessel conditions, matching the accuracy of configuration-specific MLPs while providing well-calibrated uncertainty quantification. This uncertainty-aware prediction capability is crucial for real-world deployments, as it can inform adaptive sensing and decision-making. Across various in-distribution scenarios, the ANP halves the mean absolute error versus a domain-randomized MLP (0.43 m vs. 0.84 m). The model is even able to generalize to out-of-distribution data, which means that our approach has the potential to facilitate transferability from offline training to real-world conditions. Full article
(This article belongs to the Section Ocean Engineering)

30 pages, 23278 KB  
Article
Digital Twin-Assisted Urban Resilience: A Data-Driven Framework for Sustainable Regeneration in Paranoá, Brasilia
by Tao Dong and Massimo Tadi
Urban Sci. 2025, 9(9), 333; https://doi.org/10.3390/urbansci9090333 - 26 Aug 2025
Abstract
Rapid urbanization has intensified the systemic inequities of resources and infrastructure distribution in informal settlements, particularly in the Global South. Digital Twin Modeling (DTM), as an effective data-driven representation, enables real-time analysis, scenario simulation, and design optimization, making it a promising tool to support urban resilience. This study introduces the Integrated Modification Methodology (IMM), developed by Politecnico di Milano (Italy), to explore how DTM can be systematically structured and transformed into an active instrument, linking theories with practical application. Focusing on Paranoá (Brasília), a case study developed under the NBSouth project in collaboration with the Politecnico di Milano and the University of Brasília, this research integrates advanced spatial mapping with comprehensive key performance indicators (KPIs) analysis to address developmental and environmental challenges during the regeneration process. Key metrics—Green Space Diversity, Ecosystem Service Proximity, and Green Space Continuity—were analyzed by a Geographic Information System (GIS) platform on 30 m by 30 m sampling grids. Additional KPIs across urban structural, environmental, and mobility layers were calculated to support the decision-making process for strategic mapping. This study contributes to theoretical advancements in DTM and broader discourse on urban regeneration under climate stress, offering a systemic and practical approach for multi-dimensional digitalization of urban structure and performance, supporting a more adaptive, data-based, and transferable planning process in the Global South. Full article
(This article belongs to the Topic Spatial Decision Support Systems for Urban Sustainability)
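The grid-based KPI computation the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the study's method: the feature inventory, the type labels, and the use of Shannon entropy as the Green Space Diversity measure are all assumptions introduced here.

```python
import math
from collections import Counter

# Hypothetical inventory of green-space features: (x, y, type), coordinates in
# metres. The types are illustrative, not the NBSouth project's classification.
features = [
    (12, 8, "park"), (18, 22, "park"), (45, 15, "street_trees"),
    (52, 40, "community_garden"), (75, 70, "park"), (80, 85, "street_trees"),
]

CELL = 30  # 30 m x 30 m sampling grid, as in the study

def cell_of(x, y, cell=CELL):
    """Map a coordinate to its grid-cell index."""
    return (int(x // cell), int(y // cell))

def green_space_diversity(features):
    """Shannon diversity of green-space types per grid cell."""
    cells = {}
    for x, y, kind in features:
        cells.setdefault(cell_of(x, y), Counter())[kind] += 1
    diversity = {}
    for cell, counts in cells.items():
        total = sum(counts.values())
        diversity[cell] = -sum(
            (n / total) * math.log(n / total) for n in counts.values()
        )
    return diversity

print(green_space_diversity(features))
```

A cell containing only one feature type scores 0; a cell with an even mix of two types scores ln 2 ≈ 0.69. In a real GIS workflow the same aggregation would run over polygon layers rather than points.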
28 pages, 3631 KB  
Article
Integrated Risk Assessment in Construction Contracts: Comparative Evaluation of Risk Matrix and Monte Carlo Simulation on a High-Rise Office Building Project
by Anna Starczyk-Kołbyk and Izabela Jędras
Appl. Sci. 2025, 15(17), 9371; https://doi.org/10.3390/app15179371 - 26 Aug 2025
Abstract
This study investigates the application of two complementary risk analysis methods—risk matrix and Monte Carlo simulation—in the context of a large-scale office building construction project. The paper explores the theoretical and practical aspects of construction risk, focusing on how probabilistic and qualitative tools can support informed decision-making. Twelve key risks, including both threats and opportunities, were identified and quantified using expert judgment and historical data. The risk matrix provided an initial prioritization of risk severity and likelihood, while Monte Carlo simulations allowed for the modeling of uncertainty in cost outcomes across a probabilistic spectrum. The results indicate a high level of consistency between the methods, with both identifying value engineering as a dominant opportunity and network documentation errors as critical threats. Monte Carlo simulations further revealed that, under proper risk management, the project avoids additional cost overruns with 60% probability. This integrated approach provides practical insights for contractors and project managers seeking to enhance the robustness of risk assessment in complex construction environments.
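A Monte Carlo cost-risk simulation of the kind the abstract describes can be sketched as below. The risk register, occurrence probabilities, and triangular cost distributions are hypothetical placeholders, not the twelve risks or figures from the study.

```python
import random

# Hypothetical risk register: (name, probability of occurrence,
# (min, mode, max) cost impact). Negative impacts are opportunities (savings).
RISKS = [
    ("value engineering",            0.7, (-500_000, -300_000, -100_000)),
    ("network documentation errors", 0.5, (100_000, 250_000, 600_000)),
    ("ground conditions",            0.3, (50_000, 150_000, 400_000)),
    ("material price escalation",    0.4, (20_000, 80_000, 200_000)),
]

def simulate_total_cost_impact(risks, n_trials=10_000, seed=42):
    """Monte Carlo simulation of the combined cost impact of all risks."""
    rng = random.Random(seed)  # fixed seed for reproducible runs
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for _name, prob, (lo, mode, hi) in risks:
            if rng.random() < prob:  # does this risk materialize in the trial?
                total += rng.triangular(lo, hi, mode)
        totals.append(total)
    return totals

totals = simulate_total_cost_impact(RISKS)
# Probability that the project avoids a net cost overrun (total impact <= 0).
p_no_overrun = sum(t <= 0 for t in totals) / len(totals)
print(f"P(no net overrun) = {p_no_overrun:.2f}")
```

Reading percentiles off the sorted totals (e.g., the 80th percentile as a contingency budget) is the usual next step in such an analysis.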
15 pages, 884 KB  
Article
Enhancing Sustainability Through Quality Controlled Energy Data: The Horizon 2020 EnerMaps Project
by Simon Pezzutto, Dario Bottino-Leone and Eric John Wilczynski
Sustainability 2025, 17(17), 7684; https://doi.org/10.3390/su17177684 - 26 Aug 2025
Abstract
The Horizon 2020 EnerMaps project addresses the fragmentation and variable reliability of European energy datasets by developing a reproducible quality control (QC) framework aligned with FAIR (Findable, Accessible, Interoperable, Reusable) principles. This research supports sustainability goals by enabling better decision making in energy management, resource optimization, and sustainable policy development. This study applies the framework to an initial inventory of 50 spatially referenced energy datasets, classifying them into three assessment levels and subjecting each level to progressively deeper checks: expert consultation, metadata verification against a customized "DataCite/schema.org" schema, documentation review, completeness analysis, consistency testing via simple linear regressions, comparative descriptive statistics, and community feedback preparation. The results show that all datasets are findable and accessible, yet critical FAIR attributes remain weak: 68% lack explicit licenses and 96% omit terms-of-use statements; methodology descriptions are present in 77% of cases, while quantitative accuracy information appears in only 43%. Completeness screening reveals that more than half of the datasets exhibit over 20% missing values in one or more key dimensions. Consistency analyses nevertheless indicate statistically significant correlations (p < 0.05) for the majority of paired comparisons, supporting basic reliability. By improving the FAIRness of energy data, this study directly contributes to more effective sustainability assessments and interventions. The proposed QC workflow therefore provides a scalable route to improve the transparency, comparability, and reusability of heterogeneous energy data, and its adoption could accelerate open energy modelling and policy analysis across Europe.
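The completeness screening step (flagging datasets with over 20% missing values in any key dimension) can be sketched as follows. The dataset names, fields, and values are invented for illustration and are not taken from the EnerMaps inventory.

```python
# Each hypothetical dataset is a list of records; None marks a missing value.
datasets = {
    "heating_demand": [
        {"region": "AT", "year": 2019, "value": 41.2},
        {"region": "BE", "year": 2019, "value": None},
        {"region": "BG", "year": None, "value": 12.7},
        {"region": "HR", "year": 2019, "value": 9.3},
    ],
    "pv_potential": [
        {"region": "AT", "year": 2019, "value": 3.1},
        {"region": "BE", "year": 2019, "value": 2.8},
    ],
}

def missing_share(records, field):
    """Fraction of records where `field` is missing (None)."""
    return sum(r[field] is None for r in records) / len(records)

def flag_incomplete(datasets, fields=("region", "year", "value"), threshold=0.2):
    """Names of datasets exceeding `threshold` missing values in any key dimension."""
    return [
        name for name, records in datasets.items()
        if any(missing_share(records, f) > threshold for f in fields)
    ]

print(flag_incomplete(datasets))  # → ['heating_demand'] (25% missing in two fields)
```

The same per-dimension share also feeds naturally into the three-level classification the abstract mentions, by escalating datasets that fail the screen to deeper checks.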
20 pages, 592 KB  
Review
The Temporal Evolution of Large Language Model Performance: A Comparative Analysis of Past and Current Outputs in Scientific and Medical Research
by Ishith Seth, Gianluca Marcaccini, Bryan Lim, Jennifer Novo, Stephen Bacchi, Roberto Cuomo, Richard J. Ross and Warren M. Rozen
Informatics 2025, 12(3), 86; https://doi.org/10.3390/informatics12030086 - 26 Aug 2025
Abstract
Background: Large language models (LLMs) such as ChatGPT have evolved rapidly, with notable improvements in coherence, factual accuracy, and contextual relevance. However, their academic and clinical applicability remains under scrutiny. This study evaluates the temporal performance evolution of LLMs by comparing earlier model outputs (GPT-3.5 and GPT-4.0) with ChatGPT-4.5 across three domains: aesthetic surgery counseling, an academic discussion of base-of-thumb arthritis, and a systematic literature review. Methods: We replicated the methodologies of three previously published studies using identical prompts in ChatGPT-4.5. Each output was assessed against its predecessor using a nine-domain Likert-based rubric measuring factual accuracy, completeness, reference quality, clarity, clinical insight, scientific reasoning, bias avoidance, utility, and interactivity. Expert reviewers in plastic and reconstructive surgery independently scored and compared model outputs across versions. Results: ChatGPT-4.5 outperformed earlier versions across all domains. Reference quality improved most significantly (a score increase of +4.5), followed by factual accuracy (+2.5), scientific reasoning (+2.5), and utility (+2.5). In aesthetic surgery counseling, GPT-3.5 produced generic responses lacking clinical detail, whereas ChatGPT-4.5 offered tailored, structured, and psychologically sensitive advice. In academic writing, ChatGPT-4.5 eliminated reference hallucination, correctly applied evidence hierarchies, and demonstrated advanced reasoning. In the literature review, recall remained suboptimal, but precision, citation accuracy, and contextual depth improved substantially. Conclusion: ChatGPT-4.5 represents a major step forward in LLM capability, particularly in generating trustworthy academic and clinical content. While not yet suitable as a standalone decision-making tool, its outputs now support research planning and early-stage manuscript preparation. Persistent limitations include information recall and interpretive flexibility. Continued validation is essential to ensure ethical, effective use in scientific workflows.