Search Results (5,461)

Search Parameters:
Keywords = softwarized networks

Article
Robustness Evaluation and Enhancement Strategy of Cloud Manufacturing Service System Based on Hybrid Modeling
by Xin Zheng, Beiyu Yi and Hui Min
Mathematics 2025, 13(18), 2905; https://doi.org/10.3390/math13182905 (registering DOI) - 9 Sep 2025
Abstract
In dynamic and open cloud service processes, particularly in distributed networked manufacturing environments, the complex and volatile manufacturing landscape introduces numerous uncertainties and disturbances. This paper addresses the common issue of cloud resource connection interruptions by proposing a path substitution strategy based on alternative service routes. By integrating agent-based simulation and complex network methodologies, a simulation model for evaluating the robustness of cloud manufacturing service systems is developed, enabling dynamic simulation and quantitative decision-making for the proposed robustness enhancement strategies. First, a hybrid modeling approach for cloud manufacturing service systems is proposed to meet the needs of robustness analysis. The specific construction of the hybrid simulation model is achieved using the AnyLogic 8.7.4 simulation software and Java-based secondary development techniques. Second, a complex network model focusing on cloud manufacturing resource entities is further constructed based on the simulation model. By combining the two models, two-dimensional robustness evaluation indicators—comprising performance robustness and structural robustness—are established. Then, four types of edge attack strategies are designed based on the initial topology and recomputed topology. To ensure system operability after edge failures, a path substitution strategy is proposed by introducing redundant routes. Finally, a case study of a cloud manufacturing project is conducted. The results show the following: (1) The proposed robustness evaluation model fully captures complex disturbance scenarios in cloud manufacturing, and the designed simulation experiments support the evaluation and comparative analysis of robustness improvement strategies from both performance and structural robustness dimensions. (2) The path substitution strategy significantly enhances the robustness of cloud manufacturing services, though its effects on performance and structural robustness vary across different disturbance scenarios. Full article
(This article belongs to the Special Issue Interdisciplinary Modeling and Analysis of Complex Systems)
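The edge-attack and structural-robustness procedure described in this abstract can be illustrated with a small graph sketch. The network generator, the betweenness-based attack criterion, and the largest-connected-component robustness measure below are illustrative assumptions, not the authors' AnyLogic/Java hybrid model.

```python
# Minimal NetworkX sketch of edge attacks on a stand-in service network,
# assuming betweenness-ranked removals and an LCC-fraction robustness metric.
import networkx as nx

def structural_robustness(g: nx.Graph) -> float:
    """Fraction of nodes remaining in the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(g), key=len)
    return len(largest) / g.number_of_nodes()

def edge_attack(g: nx.Graph, n_edges: int, recompute: bool = True):
    """Remove n_edges edges ranked by betweenness; optionally re-rank after
    every removal (the 'recomputed topology' strategy)."""
    g = g.copy()
    history = [structural_robustness(g)]
    ranking = sorted(nx.edge_betweenness_centrality(g).items(),
                     key=lambda kv: kv[1], reverse=True)
    for _ in range(n_edges):
        if recompute:
            ranking = sorted(nx.edge_betweenness_centrality(g).items(),
                             key=lambda kv: kv[1], reverse=True)
        if not ranking:
            break
        (u, v), _score = ranking.pop(0)
        if g.has_edge(u, v):
            g.remove_edge(u, v)
        history.append(structural_robustness(g))
    return history

if __name__ == "__main__":
    service_net = nx.barabasi_albert_graph(100, 2, seed=1)  # stand-in resource network
    print("initial-topology attack:", edge_attack(service_net, 20, recompute=False)[-1])
    print("recomputed-topology attack:", edge_attack(service_net, 20, recompute=True)[-1])
```

Switching `recompute` between False and True mirrors the distinction the abstract draws between attacks based on the initial topology and on the recomputed topology.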
Article
LSNet: Adaptive Latent Space Networks for Vulnerability Severity Assessment
by Yizhou Wang, Jin Zhang and Mingfeng Huang
Information 2025, 16(9), 779; https://doi.org/10.3390/info16090779 (registering DOI) - 8 Sep 2025
Abstract
As software vulnerabilities become increasingly harmful, more efficient vulnerability assessment methods are needed. However, existing methods rely mainly on manual updates and inefficient rule matching, and they struggle to capture potential correlations between vulnerabilities, resulting in strong subjectivity and low efficiency. To this end, this paper proposes a vulnerability severity assessment method named Latent Space Networks (LSNet). Specifically, based on a clustering analysis of Common Vulnerability Scoring System (CVSS) metrics, we first exploit inter-metric relations for CVSS metric prediction and propose an adaptive transformer that extracts vulnerability features from both global semantic and local latent-space representations. We then use bidirectional encoding and token masking to strengthen the model’s understanding of vulnerability–location relationships, and combine the Transformer with convolution to significantly improve the model’s ability to identify vulnerable text. Finally, extensive experiments on the open vulnerability dataset and the CCF OSC2024 dataset demonstrate that LSNet is capable of extracting potential correlation features. Compared with baseline methods, including SVM, Transformer, TextCNN, BERT, DeBERTa, ALBERT, and RoBERTa, it exhibits higher accuracy and efficiency. Full article
(This article belongs to the Topic Addressing Security Issues Related to Modern Software)
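A minimal sketch of the kind of transformer-plus-convolution severity classifier the abstract describes is shown below; the layer sizes, vocabulary size, and four severity classes are assumptions for illustration and do not reproduce the LSNet architecture.

```python
# Hedged PyTorch sketch: global semantic features from a small Transformer
# encoder, local features from a 1-D convolution, and a severity head.
import torch
import torch.nn as nn

class TinySeverityNet(nn.Module):
    def __init__(self, vocab_size=30522, d_model=128, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                                   dim_feedforward=256,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Convolution captures local patterns, echoing the abstract's
        # combination of Transformer and convolution.
        self.conv = nn.Conv1d(d_model, d_model, kernel_size=3, padding=1)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, token_ids):                     # (batch, seq_len)
        x = self.encoder(self.embed(token_ids))       # global semantic features
        x = self.conv(x.transpose(1, 2)).amax(dim=2)  # local features, max-pooled
        return self.head(x)                           # severity logits

logits = TinySeverityNet()(torch.randint(0, 30522, (2, 64)))
print(logits.shape)  # torch.Size([2, 4])
```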
Article
EGFR Amplification in Diffuse Glioma and Its Correlation to Language Tract Integrity
by Alim Emre Basaran, Alonso Barrantes-Freer, Max Braune, Gordian Prasse, Paul-Philipp Jacobs, Johannes Wach, Martin Vychopen, Erdem Güresir and Tim Wende
Diagnostics 2025, 15(17), 2266; https://doi.org/10.3390/diagnostics15172266 - 8 Sep 2025
Abstract
Background: The epidermal growth factor receptor (EGFR) is an important factor in the behavior of diffuse glioma, serving as a potential biomarker for tumor aggressiveness and a therapeutic target. Diffusion tensor imaging (DTI) provides insights into the microstructural integrity of brain tissues, allowing for detailed visualization of tumor-induced changes in white matter tracts. This imaging technique can complement molecular pathology by correlating imaging findings with molecular markers and genetic profiles, potentially enhancing the understanding of tumor behavior and aiding in the formulation of targeted therapeutic strategies. The present study aimed to investigate the molecular properties of diffuse glioma based on DTI sequences. Methods: A total of 27 patients with diffuse glioma (in accordance with the WHO 2021 classification) were investigated using preoperative DTI sequences. The study was conducted using the tractography software DSI Studio (Hou version 2025.04.16). Following the preprocessing of the raw data, volumes of the arcuate fasciculus (AF), frontal aslant tract (FAT), inferior fronto-occipital fasciculus (IFOF), superior longitudinal fasciculus (SLF), and uncinate fasciculus (UF) were reconstructed, and fractional anisotropy (FA) was derived. Molecular pathological examination was conducted to assess the presence of EGFR amplifications. Results: The mean age of patients was 56 ± 13 years, with 33% females. EGFR amplification was observed in 8/27 (29.6%) of cases. Following correction for multiple comparisons, FA in the left AF (p = 0.025) and in the left FAT (p = 0.020) was found to be significantly lowered in EGFR-amplified glioma. In the right language network, however, no statistically significant changes were observed. Conclusions: EGFR amplification may be associated with lower white matter integrity of left hemispheric language tracts, possibly impairing neurological function and impacting surgical outcomes. The underlying molecular and cellular mechanisms driving this association require further investigation. Full article
(This article belongs to the Special Issue Advanced Brain Tumor Imaging)
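The tract-wise group comparison with multiple-comparison correction can be sketched as follows; the synthetic FA values, the Mann-Whitney U test, and the Holm correction are assumptions, since the abstract does not state which test and correction were used.

```python
# Hedged sketch: comparing tract-wise fractional anisotropy (FA) between
# EGFR-amplified (n = 8) and non-amplified (n = 19) groups on synthetic data.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
tracts = ["AF_left", "FAT_left", "IFOF_left", "SLF_left", "UF_left"]
fa_amplified = {t: rng.normal(0.42, 0.05, 8) for t in tracts}     # synthetic
fa_nonamplified = {t: rng.normal(0.47, 0.05, 19) for t in tracts}  # synthetic

raw_p = [mannwhitneyu(fa_amplified[t], fa_nonamplified[t]).pvalue for t in tracts]
reject, corrected_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")
for t, p, sig in zip(tracts, corrected_p, reject):
    print(f"{t}: corrected p = {p:.3f}, significant = {sig}")
```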
Article
Comparative Analysis of SDN and Blockchain Integration in P2P Streaming Networks for Secure and Reliable Communication
by Aisha Mohmmed Alshiky, Maher Ali Khemakhem, Fathy Eassa and Ahmed Alzahrani
Electronics 2025, 14(17), 3558; https://doi.org/10.3390/electronics14173558 - 7 Sep 2025
Abstract
Rapid advancements in peer-to-peer (P2P) streaming technologies have significantly impacted digital communication, enabling scalable, decentralized, and real-time content distribution. Despite these advancements, challenges persist, including dynamic topology management, high latency, security vulnerabilities, and unfair resource sharing (e.g., free riding). While software-defined networking (SDN) and blockchain individually address aspects of these limitations, their combined potential for comprehensive optimization remains underexplored. This study proposes a distributed SDN (DSDN) architecture enhanced with blockchain support to provide secure, scalable, and reliable P2P video streaming. We identified research gaps through critical analysis of the literature. We systematically compared traditional P2P, SDN-enhanced, and hybrid architectures across six performance metrics: latency, throughput, packet loss, authentication accuracy, packet delivery ratio, and control overhead. Simulations with 200 peers demonstrate that the proposed hybrid SDN–blockchain framework achieves a latency of 140 ms, a throughput of 340 Mbps, an authentication accuracy of 98%, a packet delivery ratio of 97.8%, a packet loss ratio of 2.2%, and a control overhead of 9.3%, outperforming state-of-the-art solutions such as NodeMaps, the reinforcement learning-based routing framework (RL-RF), and content delivery network–P2P networks (CDN-P2P). This work establishes a scalable and attack-resilient foundation for next-generation P2P streaming. Full article
(This article belongs to the Section Computer Science & Engineering)
Article
Effect of Water Regeneration and Integration on Technical Indicators of PVC Manufacturing Using Process System Engineering
by Eduardo Andrés Aguilar-Vásquez, Segundo Rojas-Flores and Ángel Darío González-Delgado
Polymers 2025, 17(17), 2418; https://doi.org/10.3390/polym17172418 - 6 Sep 2025
Abstract
The suspension polymerization process of polyvinyl chloride (PVC) production involves significant freshwater consumption alongside substantial wastewater emissions. Mass integration strategies have been used to address this problem, but only through direct recycling approaches. Therefore, in this study, a regeneration approach was applied to integrate a PVC suspension process to improve water management. The reuse network was evaluated through a water–energy–product (WEP) technical analysis after being simulated in Aspen Plus software v.14. The mass integration allowed for a 61% reduction in freshwater consumption and an 83% reduction in wastewater. However, 258.6 t/day of residual wastewater still remained after regeneration. The WEP analysis found that the process was efficient in handling raw materials and process products due to the high yield and recovery of unreacted materials. The integration also benefitted process performance substantially, with a freshwater consumption indicator of 83%, a wastewater production rate of 63%, and freshwater costs of $267,322 per year (from $694,080 before integration). In terms of energy performance, the results were only moderate. The process showed high energy consumption (indicators below 50%), with indicators related to the use of natural gas, electricity, and energy costs being affected by the regeneration. However, the limited heat integration provided minor energy savings (11 MJ/h). Finally, this work offers useful insight into water conservation and the circular economy, since it applied emerging effluent-regeneration technologies to a plastics plant, showcasing the key benefits and trade-offs of these strategies. Full article
(This article belongs to the Special Issue Biodegradable and Functional Polymers for Food Packaging)
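A quick arithmetic check of the reported cost figures; this is a sketch for orientation only and is not part of the study's Aspen Plus analysis.

```python
# Hedged arithmetic check of the reported freshwater-cost savings.
cost_before = 694_080   # USD per year, before integration
cost_after = 267_322    # USD per year, after regeneration-based integration
saving = cost_before - cost_after
print(f"annual saving: ${saving:,} ({saving / cost_before:.1%} reduction)")
# -> annual saving: $426,758 (61.5% reduction), in line with the reported
#    61% cut in freshwater consumption.
```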
Proceeding Paper
Identification and Optimization of Components of University Campus Space
by Yue Sun and Yifei Ouyang
Eng. Proc. 2025, 108(1), 33; https://doi.org/10.3390/engproc2025108033 (registering DOI) - 5 Sep 2025
Abstract
Amid the expansion of higher education and growing demands for spatial quality, modern university campuses face challenges including inefficient space utilization and a disconnect from human-centered design. We developed a coupled model that integrates the analytic hierarchy process (AHP) with space syntax theory to identify and address functional fragmentation, limited accessibility, and diminished spatial vitality. The Delphi method was employed to determine weights for visual and traffic influence factors. Through spatial quantitative analysis using Depthmap software, we estimated spatial-efficiency discrepancies across 11 component types, including school gates, teaching buildings, and libraries. A case study was conducted at a university located in the hilly terrain of Conghua District, Guangzhou, China, which revealed significant contradictions between subjective evaluations and objective data for components such as the administrative building and gymnasium. These contradictions were associated with poor visual permeability, excessive path redundancy, and imbalanced functional layouts. Based on the results of this study, targeted optimization strategies were proposed, including permeable interface designs, path network reconfiguration, and the implementation of dynamic functional modules. These interventions were tailored to the humid subtropical climate, balancing shading, ventilation, and visual transparency. The study provides methodological support for the renovation of existing campus infrastructure and offers theoretical and technical references for space renewal in tropical and subtropical academic environments and for enhancing the quality and resilience of campus spaces. The results also broaden the application of interdisciplinary methods in university planning. Full article
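The AHP step of the coupled model can be sketched as follows; the 3x3 pairwise comparison matrix and the criterion names are invented for illustration and do not reproduce the paper's Delphi-elicited judgments.

```python
# Hedged AHP sketch: criterion weights from the principal eigenvector of a
# pairwise comparison matrix, plus the standard consistency check.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])          # pairwise comparisons (Saaty scale)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # random index RI = 0.58 for n = 3
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below 0.1 is the usual acceptance threshold for the elicited judgments.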
Article
Information-Cognitive Concept of Predicting Method for HCI Objects’ Perception Subjectivization Results Based on Impact Factors Analysis with Usage of Multilayer Perceptron ANN
by Andrii Pukach, Vasyl Teslyuk, Nataliia Lysa, Liubomyr Sikora and Bohdana Fedyna
Appl. Sci. 2025, 15(17), 9763; https://doi.org/10.3390/app15179763 - 5 Sep 2025
Abstract
An information-cognitive concept of a method for predicting human–computer interaction (HCI) objects’ perception subjectivization results, based on impact-factor analysis and multilayer perceptron (MP) artificial neural networks (ANNs), has been developed. The main purpose of the method is to increase the level of intellectualization and automation of research into HCI objects’ perception subjectivization, especially in the context of comprehensive software product support. The method rests on the developed conceptual models, a mathematical model, and a specialized algorithm. Prediction is carried out through a preliminary in-depth analysis of a set of unique direct chains (UDCs) of neurons in the corresponding encapsulated MP ANN. These chains are built by examining the results of isolated influences of each declared impact factor and then comparing the direct chain of each investigated modeling case against the UDCs in this set. As a practical validation of the method, an applied problem was solved: identifying the support-team member whose multifactor portrait most closely matches the multifactor portrait of a given client’s user. Full article
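A minimal sketch of the underlying idea of probing isolated impact-factor influences with an MLP is shown below; the data, factor count, and probing scheme are invented, and the paper's unique-direct-chain analysis is not reproduced.

```python
# Hedged sketch: an MLP maps impact-factor vectors to a perception-
# subjectivization score; each factor's isolated influence is probed by
# activating it alone, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_factors = 6
X = rng.uniform(0, 1, size=(500, n_factors))           # impact-factor intensities
true_w = np.array([0.4, 0.1, 0.3, 0.05, 0.1, 0.05])
y = X @ true_w + 0.02 * rng.normal(size=500)            # synthetic target score

mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
mlp.fit(X, y)

# Isolated-influence probe: activate one factor at a time.
for i in range(n_factors):
    probe = np.zeros((1, n_factors))
    probe[0, i] = 1.0
    print(f"factor {i}: isolated response = {mlp.predict(probe)[0]:.3f}")
```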
Article
Bibliometric Analysis of Hospital Design: Knowledge Mapping Evolution and Research Trends
by Jingwen Liu and Youngho Yeo
Buildings 2025, 15(17), 3196; https://doi.org/10.3390/buildings15173196 - 4 Sep 2025
Abstract
Hospital design plays a pivotal role in improving patient outcomes, enhancing clinical efficiency, and strengthening infection control. Since the outbreak of COVID-19, research in this field has expanded significantly, showing a marked trend toward interdisciplinary integration. In this study, bibliometric analysis was conducted using CiteSpace (version 6.2.R3) as the primary tool, with Excel and Tableau (version 2024.3) as supplementary software. A total of 877 documents on hospital design published between 1932 and 2025 were retrieved from the Web of Science Core Collection and analyzed from multiple perspectives. The analysis examined publication trends, collaborative networks, co-citation structures, disciplinary evolution, and keyword dynamics. The results indicate that the field has entered a phase of rapid development since 2019. Global collaboration networks are becoming increasingly multipolar; yet, institutional and author-level connections remain decentralized, with relatively low overall density. Evidence-based design (EBD) continues to serve as the theoretical foundation of the field, while emerging themes such as healing environments, biophilic design, and patient-centered spatial strategies have become major research hotspots. Increasingly, the field reflects deeper integration across disciplines, including architecture, medicine, nursing, and environmental science. This study provides a clearer picture of the developmental trajectory, knowledge base, and future directions of hospital design research, offering systematic insights and theoretical guidance for both scholars and practitioners. Full article
(This article belongs to the Special Issue Data Analytics Applications for Architecture and Construction)
Article
From Detection to Decision: Transforming Cybersecurity with Deep Learning and Visual Analytics
by Saurabh Chavan and George Pappas
AI 2025, 6(9), 214; https://doi.org/10.3390/ai6090214 - 4 Sep 2025
Abstract
Objectives: The persistent evolution of software vulnerabilities—spanning novel zero-day exploits to logic-level flaws—continues to challenge conventional cybersecurity mechanisms. Static rule-based scanners and opaque deep learning models often lack the precision and contextual understanding required for both accurate detection and analyst interpretability. This paper presents a hybrid framework for real-time vulnerability detection that improves both robustness and explainability. Methods: The framework integrates semantic encoding via Bidirectional Encoder Representations from Transformers (BERT), structural analysis using Deep Graph Convolutional Neural Networks (DGCNNs), and lightweight prioritization through Kernel Extreme Learning Machines (KELMs). The architecture incorporates Minimum Intermediate Representation (MIR) learning to reduce false positives and fuses multi-modal data (source code, execution traces, textual metadata) for robust, scalable performance. Explainable Artificial Intelligence (XAI) visualizations—combining SHAP-based attributions and CVSS-aligned pair plots—serve as an analyst-facing interpretability layer. The framework is evaluated on benchmark datasets, including VulnDetect and the NIST Software Reference Library (NSRL, version 2024.12.1, used strictly as a benign baseline for false positive estimation). Results: Evaluated on precision, recall, AUPRC, MCC, and calibration (ECE/Brier score), the framework demonstrated improved robustness and reduced false positives compared to baselines. An internal interpretability validation was conducted to align SHAP/GNNExplainer outputs with known vulnerability features; formal usability testing with practitioners is left as future work. Conclusions: Designed with DevSecOps integration in mind, the framework is packaged in containerized modules (Docker/Kubernetes) and outputs SIEM-compatible alerts, enabling potential compatibility with Splunk, GitLab CI/CD, and similar tools. While full enterprise deployment was not performed, these deployment-oriented design choices support scalability and practical adoption. Full article
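The lightweight-prioritization stage can be sketched with an extreme-learning-machine-style classifier over fused feature vectors; the synthetic features below stand in for the BERT semantic embeddings and DGCNN structural features, and this is not the paper's KELM implementation.

```python
# Hedged sketch: ELM-style model (fixed random hidden layer, closed-form ridge
# readout) over concatenated semantic + structural feature vectors.
import numpy as np

rng = np.random.default_rng(7)
n, d_sem, d_struct = 400, 32, 16
X = np.hstack([rng.normal(size=(n, d_sem)),         # stand-in semantic embedding
               rng.normal(size=(n, d_struct))])     # stand-in graph features
y = (X[:, 0] + 0.5 * X[:, d_sem] > 0).astype(float)  # toy vulnerable/benign label

W = rng.normal(size=(d_sem + d_struct, 64))          # fixed random hidden layer
H = np.tanh(X @ W)
ridge = 1e-2
beta = np.linalg.solve(H.T @ H + ridge * np.eye(64), H.T @ y)  # closed-form readout

train_acc = ((np.tanh(X @ W) @ beta > 0.5) == y).mean()
print(f"ELM-style readout, training accuracy: {train_acc:.2f}")
```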
Article
An Adaptive Application-Aware Dynamic Load Balancing Framework for Open-Source SD-WAN
by Teodor Petrović, Aleksa Vidaković, Ilija Doknić, Mladen Veinović and Živko Bojović
Sensors 2025, 25(17), 5516; https://doi.org/10.3390/s25175516 - 4 Sep 2025
Abstract
Traditional Software-Defined Wide Area Network (SD-WAN) solutions lack adaptive load-balancing mechanisms, leading to inefficient traffic distribution, increased latency, and performance degradation. This paper presents an Application-Aware Dynamic Load Balancing (AADLB) framework designed for open-source SD-WAN environments. The proposed solution enables dynamic traffic routing based on real-time network performance indicators, including CPU utilization, memory usage, connection delay, and packet loss, while considering application-specific requirements. Unlike conventional load-balancing methods, such as Weighted Round Robin (WRR), Weighted Fair Queuing (WFQ), Priority Queuing (PQ), and Deficit Round Robin (DRR), AADLB continuously updates traffic weights based on application requirements and network conditions, ensuring optimal resource allocation and improved Quality of Service (QoS). The AADLB framework leverages a heuristic-based dynamic weight assignment algorithm to redistribute traffic in a multi-cloud environment, mitigating congestion and enhancing system responsiveness. Experimental results demonstrate that compared to these traditional algorithms, the proposed AADLB framework improved CPU utilization by an average of 8.40%, enhanced CPU stability by 76.66%, increased RAM utilization stability by 6.97%, slightly reduced average latency by 2.58%, and significantly enhanced latency consistency by 16.74%. These improvements enhance SD-WAN scalability, optimize bandwidth usage, and reduce operational costs. Our findings highlight the potential of application-aware dynamic load balancing in SD-WAN, offering a cost-effective and scalable alternative to proprietary solutions. Full article
(This article belongs to the Section Sensor Networks)
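An application-aware dynamic weighting heuristic in the spirit of AADLB can be sketched as follows; the scoring formula and coefficient values are illustrative assumptions rather than the paper's algorithm.

```python
# Hedged sketch: per-path scores from live CPU, memory, delay, and loss
# measurements, weighted by application-specific importance, with traffic
# shares set proportionally to the scores.
from dataclasses import dataclass

@dataclass
class PathStats:
    cpu: float       # utilization, 0..1
    mem: float       # utilization, 0..1
    delay_ms: float
    loss: float      # packet-loss ratio, 0..1

def path_score(s: PathStats, app_profile: dict) -> float:
    """Higher is better; each metric is penalized by its application weight."""
    penalty = (app_profile["cpu"] * s.cpu
               + app_profile["mem"] * s.mem
               + app_profile["delay"] * min(s.delay_ms / 200.0, 1.0)
               + app_profile["loss"] * min(s.loss / 0.05, 1.0))
    return max(1e-6, 1.0 - penalty / sum(app_profile.values()))

def traffic_shares(paths: dict, app_profile: dict) -> dict:
    scores = {name: path_score(st, app_profile) for name, st in paths.items()}
    total = sum(scores.values())
    return {name: round(s / total, 3) for name, s in scores.items()}

# Example: a latency-sensitive application over two WAN paths.
video_profile = {"cpu": 1.0, "mem": 0.5, "delay": 3.0, "loss": 3.0}
paths = {"wan_a": PathStats(0.6, 0.5, 40, 0.002),
         "wan_b": PathStats(0.3, 0.4, 120, 0.01)}
print(traffic_shares(paths, video_profile))
```

Recomputing the shares every measurement interval is what makes the weighting dynamic, in contrast to the static weights of WRR-style schemes.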
Article
EMBRAVE: EMBedded Remote Attestation and Verification framEwork
by Enrico Bravi, Alessio Claudio, Antonio Lioy and Andrea Vesco
Sensors 2025, 25(17), 5514; https://doi.org/10.3390/s25175514 - 4 Sep 2025
Abstract
The Internet of Things (IoT) is a growing area of interest with an increasing number of applications, including cyber–physical systems (CPS). Emerging threats in the IoT context make software integrity verification a key solution for checking that IoT platforms have not been tampered with so that they behave as expected. Trusted Computing techniques, in particular Remote Attestation (RA), can address this critical need. RA techniques allow a trusted third party (Verifier) to verify the software integrity of a remote platform (Attester). RA techniques rely on the presence of a secure element on the device that acts as a Root of Trust (RoT). Several specifications have been proposed to build RoTs, such as the Trusted Platform Module (TPM), the Device Identifier Composition Engine (DICE), and the Measurement and Attestation RootS (MARS). IoT contexts are often characterized by a highly dynamic scenario where platforms are constantly joining and leaving networks. This condition can be challenging for RA techniques as they need to be aware of the nodes that make up the network. This paper presents the EMBedded Remote Attestation and Verification framEwork (EMBRAVE). It is a TPM-based RA framework designed to provide a dynamic and scalable solution for RA in IoT networks. To support dynamic networks, we designed and developed Join and Leave Protocols, permitting attestation of devices that are not directly under the control of the network owner. This paper discusses the design and open-source implementation of EMBRAVE and presents experimental results demonstrating its effectiveness. Full article
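The verifier-side appraisal at the core of remote attestation can be sketched as a comparison of reported measurements against golden reference values; TPM quote signatures, nonces, and the EMBRAVE Join/Leave protocols are deliberately omitted here, and the component names are hypothetical.

```python
# Hedged sketch of the integrity check a Verifier performs on an Attester's
# reported software measurements (hashes) against known-good references.
import hashlib

REFERENCE_VALUES = {          # golden measurements held by the Verifier
    "bootloader": hashlib.sha256(b"bootloader v1.2").hexdigest(),
    "kernel": hashlib.sha256(b"kernel 6.1.0-rt").hexdigest(),
    "agent": hashlib.sha256(b"attestation-agent 0.9").hexdigest(),
}

def appraise(evidence: dict) -> bool:
    """Return True only if every reported measurement matches its reference."""
    for component, measured in evidence.items():
        expected = REFERENCE_VALUES.get(component)
        if expected is None or measured != expected:
            print(f"attestation failed: {component}")
            return False
    return True

# An attester reporting a tampered kernel is rejected.
evidence = dict(REFERENCE_VALUES,
                kernel=hashlib.sha256(b"kernel (tampered)").hexdigest())
print("trusted:", appraise(evidence))
```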
Review
Mapping the Use of Bibliometric Software and Methodological Transparency in Literature Review Studies: A Comparative Analysis of China-Affiliated and Non-China-Affiliated Research Communities (2015–2024)
by Altyeb Ali Abaker Omer and Yajie Dong
Publications 2025, 13(3), 40; https://doi.org/10.3390/publications13030040 - 3 Sep 2025
Abstract
The growing use of bibliometric methods in literature reviews has intensified concerns about methodological transparency and consistency. This study compares English-language reviews authored by China-affiliated and non-China-affiliated researchers between 2015 and 2024. Through bibliometric content analysis and co-word network mapping, it evaluates the following: (1) the use and purposes of bibliometric software; (2) the clarity of methodological reporting, including software versions, threshold settings, data preprocessing, and database selection; (3) the extent to which limitations are acknowledged and recommendations proposed; and (4) the dominant conceptual themes shaping research practices. The analysis covers 50 highly cited reviews (25 per group) and 4000 additional papers for thematic mapping. Findings show both convergence and divergence: while tools such as VOSviewer, CiteSpace, Gephi, and Bibliometrix are widely adopted, non-China-affiliated studies exhibit greater transparency and reflexivity, whereas China-affiliated research often emphasizes output metrics and underreports methodological challenges. These contrasts reflect broader epistemological norms and research cultures. This study underscores the need for unified reporting standards and contributes to meta-research by offering practical guidance to improve the transparency, comparability, and rigor of bibliometric-supported literature reviews. Full article
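The co-word mapping step can be sketched by building a keyword co-occurrence network; the keyword lists below are invented, and the study's corpus and tooling (VOSviewer, CiteSpace, etc.) are not reproduced.

```python
# Hedged sketch: keyword co-occurrence network from per-paper keyword lists,
# reporting the strongest links.
from itertools import combinations
import networkx as nx

papers = [
    ["bibliometrics", "VOSviewer", "research trends"],
    ["bibliometrics", "CiteSpace", "knowledge mapping"],
    ["bibliometrics", "VOSviewer", "knowledge mapping"],
]

g = nx.Graph()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = g.get_edge_data(a, b, default={"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)   # increment co-occurrence count

strongest = sorted(g.edges(data="weight"), key=lambda e: e[2], reverse=True)[:3]
print(strongest)
```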
Article
Potential Use of BME Development Kit and Machine Learning Methods for Odor Identification: A Case Study
by José Pereira, Afonso Mota, Pedro Couto, António Valente and Carlos Serôdio
Appl. Sci. 2025, 15(17), 9687; https://doi.org/10.3390/app15179687 - 3 Sep 2025
Abstract
Ensuring food quality and safety is a growing challenge in the food industry, where early detection of contamination or spoilage is crucial. Using gas sensors combined with Artificial Intelligence (AI) offers an innovative and effective approach to food identification, improving quality control and minimizing health risks. This study aims to evaluate food identification strategies using supervised learning techniques applied to data collected by the BME Development Kit, equipped with the BME688 sensor. The dataset includes measurements of temperature, pressure, humidity, and, particularly, gas composition, ensuring a comprehensive analysis of food characteristics. The methodology explores two strategies: a neural network model trained using Bosch BME AI-Studio software, and a more flexible, customizable approach that applies multiple predictive algorithms, including DT, LR, kNN, NB, and SVM. The experiments were conducted to analyze the effectiveness of both approaches in classifying different food samples based on gas emissions and environmental conditions. The results demonstrate that combining electronic noses (E-Noses) with machine learning (ML) provides high accuracy in food identification. While the neural network model from Bosch follows a structured and optimized learning approach, the second methodology enables a more adaptable exploration of various algorithms, offering greater interpretability and customization. Both approaches yielded high predictive performance, with strong classification accuracy across multiple food samples. However, performance variations depend on the characteristics of the dataset and the algorithm selection. A critical analysis suggests that optimizing sensor calibration, feature selection, and consideration of environmental parameters can further enhance accuracy. This study confirms the relevance of AI-driven gas analysis as a promising tool for food quality assessment. Full article
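The second, customizable strategy can be sketched as a cross-validated comparison of the listed classifiers; the synthetic dataset stands in for the BME688 measurements (temperature, pressure, humidity, gas resistance), and LR is assumed here to denote logistic regression.

```python
# Hedged sketch: cross-validating the classifiers named in the abstract on a
# synthetic stand-in for the gas-sensor feature vectors.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)   # 3 food classes

models = {"DT": DecisionTreeClassifier(random_state=0),
          "LR": LogisticRegression(max_iter=1000),
          "kNN": KNeighborsClassifier(),
          "NB": GaussianNB(),
          "SVM": SVC()}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```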
Article
Extending a Moldable Computer Architecture to Accelerate DL Inference on FPGA
by Mirko Mariotti, Giulio Bianchini, Igor Neri, Daniele Spiga, Diego Ciangottini and Loriano Storchi
Electronics 2025, 14(17), 3518; https://doi.org/10.3390/electronics14173518 - 3 Sep 2025
Abstract
Over the past years, the fields of Machine Learning (ML) and Deep Learning (DL) have seen strong developments in both software and hardware, with a rise in specialized devices. One of the biggest challenges in this field is the inference phase, where the trained model makes predictions on unseen data. Although computationally powerful, traditional computing architectures face limitations in efficiently managing inference requests, especially from an energy point of view. For this reason, alternative hardware solutions are needed, and among these are Field Programmable Gate Arrays (FPGAs): their key feature of being reconfigurable, combined with parallel processing capability, low latency, and low power consumption, makes these devices uniquely suited to accelerating inference tasks. In this paper, we present a novel approach to accelerating the inference phase of a multi-layer perceptron (MLP) using the BondMachine framework, an open-source framework for the design of hardware accelerators for FPGAs. An analysis of latency, energy consumption, and resource usage, as well as comparisons with standard architectures and other FPGA approaches, is presented, highlighting the strengths and critical points of the proposed solution. The present work is an exploratory study to validate the proposed methodology on MLP architectures, establishing a foundation for future work on scalability and the acceleration of more complex neural network models. Full article
(This article belongs to the Special Issue Advancements in Hardware-Efficient Machine Learning)
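The accelerated workload itself, an MLP forward pass with reduced-precision weights, can be sketched in plain NumPy; the quantization scheme below is an illustrative assumption, and the BondMachine toolchain (HDL generation, firmware) is not shown.

```python
# Hedged sketch: forward pass of a small MLP with int8-style weights, mimicking
# the reduced-precision arithmetic typically mapped onto FPGA fabric.
import numpy as np

rng = np.random.default_rng(3)

def quantize(w, scale=127):
    """Map float values to int8 with a single scale factor."""
    return np.clip(np.round(w * scale), -128, 127).astype(np.int8), scale

W1_q, s1 = quantize(rng.normal(0, 0.3, size=(16, 8)))
W2_q, s2 = quantize(rng.normal(0, 0.3, size=(4, 16)))

def mlp_forward(x_q, x_scale):
    # First layer: integer multiply-accumulate over quantized inputs/weights,
    # then rescale and apply ReLU (the pattern an FPGA DSP pipeline implements).
    h = np.maximum(0, (W1_q.astype(np.int32) @ x_q.astype(np.int32)) / (s1 * x_scale))
    # Second layer: integer weights; activations kept in float here for brevity.
    return (W2_q.astype(np.int32) @ h) / s2

x_q, x_scale = quantize(rng.normal(size=8) * 0.2)
print(mlp_forward(x_q, x_scale))
```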
Article
Investigation of Borehole Network Parameters for Rock Breaking via High-Pressure Gas Expansion in Subway Safety Passages of Environmentally Sensitive Zones
by Dunwen Liu, Jimin Zhong, Yupeng Zhang and Yuhui Jin
Buildings 2025, 15(17), 3158; https://doi.org/10.3390/buildings15173158 - 2 Sep 2025
Abstract
To address the challenge of determining the borehole layout scheme in the practical application of high-pressure gas expansion rock breaking, this study takes the excavation of the safety passage at Kaixuan Road Station on the North Extension Line 2 of Chongqing Metro Line 18 as the engineering background. The rock-breaking capacity was evaluated by analyzing the damaged zone volume caused by gas expansion using FLAC3D 6.0 numerical simulation software, and vibration monitoring was conducted for the historical buildings on the surface. This study revealed the following: (1) When the borehole depth is 1.2 m and the charge length is 0.6 m, the optimal angle is 70°, with the optimal vertical and horizontal spacing between holes being 1200 mm and 2000 mm, respectively. (2) The numerical simulations indicated that by adjusting the charge density, the optimized sandstone borehole network parameters could be applied to mudstone strata, and the rock-breaking effect was similar. The difference in the volume of the damaged zones obtained in the two strata was less than 3%. (3) The vibration analysis demonstrated that the peak particle velocity generated by high-pressure gas expansion rock fracturing at the ancient building directly above was 0.06316 cm/s, which was lower than the threshold value of 0.1 cm/s and approximately 67.95% lower than that of explosive blasting. Furthermore, when the tunnel depth exceeded 29 m, the vibration velocity of surface structures remained within the safety range. The results verified the feasibility of applying the same borehole network parameters to different strata, providing theoretical support for the practical application of high-pressure gas expansion rock fracturing technology in engineering projects. Full article
(This article belongs to the Section Building Structures)
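A quick check of the reported vibration figures; this sketch assumes the 67.95% reduction is stated relative to the explosive-blasting value.

```python
# Hedged arithmetic check of the reported peak particle velocity (PPV) figures.
ppv_gas = 0.06316    # cm/s, measured at the historical building
reduction = 0.6795   # stated reduction relative to explosive blasting
ppv_blasting = ppv_gas / (1 - reduction)
print(f"implied blasting PPV ≈ {ppv_blasting:.3f} cm/s; "
      f"gas expansion stays below the 0.1 cm/s threshold ({ppv_gas} cm/s)")
```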