Search Results (8,533)

Search Parameters:
Keywords = user requirements

31 pages, 2736 KB  
Article
The Rise of Hacking in Integrated EHR Systems: A Trend Analysis of U.S. Healthcare Data Breaches
by Benjamin Yankson, Mehdi Barati, Rebecca Bondzie and Ram Madani
J. Cybersecur. Priv. 2025, 5(3), 70; https://doi.org/10.3390/jcp5030070 - 5 Sep 2025
Abstract
Electronic health record (EHR) data breaches create severe concerns for patients' privacy and safety and a risk of loss for the healthcare entities responsible for managing patient health records. EHR systems collect a vast amount of user-sensitive data, requiring the integration, implementation, and application of essential security principles, controls, and strategies to safeguard against persistent adversary attacks. This research is an exploratory study of current integrated EHR cybersecurity attacks using breach data reported under the United States Health Insurance Portability and Accountability Act (HIPAA) privacy and security rules. This work investigates whether current EHR implementations lack the requisite security controls to prevent a cyber breach and protect user privacy. We conduct descriptive and trend analyses to summarize the data and predict its direction based on current and historical records by covered entity, type of breach, and point of breach (attack methods, patterns, and location of breached information). An Autoregressive Integrated Moving Average (ARIMA) model provides a detailed analysis of the data, demonstrating that breaches caused by hacking and IT incidents show a significant trend (coefficient 0.84, p-value < 2.2 × 10⁻¹⁶ ***). The findings reveal a consistent rise in breaches—particularly from hacking and IT incidents—disproportionately affecting healthcare providers. The study highlights that EHR data breaches often follow recurring patterns, indicating common vulnerabilities, and underlines the need for prioritized, data-driven security investments. These findings validate the hypothesis that most EHR cybersecurity attacks rely on similar attack methodologies and exploit common vulnerabilities, and they demonstrate the value of targeted mitigation strategies for strengthening healthcare cybersecurity. The findings also underscore the urgent need for healthcare organizations and policymakers to prioritize targeted, data-driven security investments and enforce stricter controls to protect EHR systems from increasingly frequent and predictable cyberattacks. Full article
(This article belongs to the Special Issue Cyber Security and Digital Forensics—2nd Edition)
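For readers who want to reproduce the kind of trend analysis this abstract describes, a minimal ARIMA fit on monthly breach counts might look like the sketch below. The file name, column names, breach-type label, and the (1, 1, 1) order are all assumptions for illustration, not the authors' code or data.

```python
# Minimal sketch of an ARIMA trend fit on monthly breach counts.
# Assumptions: a CSV of HIPAA breach reports with "date" and "type" columns;
# the (1, 1, 1) order is illustrative, not the order used in the paper.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

reports = pd.read_csv("hipaa_breaches.csv", parse_dates=["date"])  # hypothetical file
hacking = reports[reports["type"] == "Hacking/IT Incident"]        # assumed label

# Aggregate to a monthly count series and fit the model.
monthly = hacking.set_index("date").resample("MS").size()
fit = ARIMA(monthly, order=(1, 1, 1)).fit()

print(fit.summary())            # inspect coefficients and their p-values
print(fit.forecast(steps=12))   # 12-month-ahead projection of breach counts
```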
28 pages, 15252 KB  
Article
1D-CNN-Based Performance Prediction in IRS-Enabled IoT Networks for 6G Autonomous Vehicle Applications
by Radwa Ahmed Osman
Future Internet 2025, 17(9), 405; https://doi.org/10.3390/fi17090405 - 5 Sep 2025
Abstract
To foster the performance of wireless communication while saving energy, the integration of Intelligent Reflecting Surfaces (IRS) into autonomous vehicle (AV) communication networks is considered a powerful technique. This paper proposes a novel IRS-assisted vehicular communication model that combines Lagrange optimization and Gradient-Based Phase Optimization to determine the optimal transmission power, optimal interference transmission power, and IRS phase shifts. Additionally, the proposed model helps increase the Signal-to-Interference-plus-Noise Ratio (SINR) by utilizing the IRS, which maximizes energy efficiency and the achievable data rate under a variety of environmental conditions while guaranteeing that resource limits are satisfied. To represent dense vehicular environments, practical constraints for the system model, such as IRS reflection efficiency and interference from multiple sources, namely Device-to-Device (D2D), Vehicle-to-Vehicle (V2V), Vehicle-to-Base Station (V2B), and Cellular User Equipment (CUE) links, have been incorporated. A Lagrangian optimization approach is used to determine the required interference transmission power and the best IRS phase designs in order to enhance system performance. A one-dimensional convolutional neural network is then trained on the optimized data provided by this framework. This deep learning model learns to predict the required optimal IRS settings quickly, allowing for real-time adaptation in dynamic wireless environments. The simulation results show that the combined optimization and prediction strategy considerably enhances system reliability and energy efficiency over baseline techniques. This study lays a solid foundation for implementing IRS-assisted AV networks in real-world settings, hence facilitating the development of next-generation vehicular communication systems that are both performance-driven and energy-efficient. Full article
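A 1D-CNN of the kind used for performance prediction here can be sketched in a few lines of PyTorch. The input features, the output dimensions (IRS phase shifts plus a transmit-power value), and the layer sizes below are assumptions; this is a generic stand-in, not the paper's architecture.

```python
# Minimal 1D-CNN sketch mapping channel/interference features to IRS settings.
# Hypothetical dimensions: 16 input features per sample, 32 IRS phase shifts
# plus one transmit-power value as outputs; not the architecture from the paper.
import torch
import torch.nn as nn

class IRSPredictor(nn.Module):
    def __init__(self, n_features: int = 16, n_outputs: int = 33):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * n_features, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> add a channel axis for Conv1d
        h = self.conv(x.unsqueeze(1))
        return self.head(h.flatten(start_dim=1))

model = IRSPredictor()
features = torch.randn(8, 16)     # a batch of 8 hypothetical channel snapshots
predictions = model(features)     # predicted phase shifts + transmit power
print(predictions.shape)          # torch.Size([8, 33])
```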
27 pages, 8405 KB  
Article
A Stereo Synchronization Method for Consumer-Grade Video Cameras to Measure Multi-Target 3D Displacement Using Image Processing in Shake Table Experiments
by Mearge Kahsay Seyfu and Yuan-Sen Yang
Sensors 2025, 25(17), 5535; https://doi.org/10.3390/s25175535 - 5 Sep 2025
Abstract
The use of consumer-grade cameras for stereo vision provides a cost-effective, non-contact method for measuring three-dimensional displacement in civil engineering experiments. However, obtaining accurate 3D coordinates requires precise temporal alignment of several unsynchronized cameras, which is often lacking in consumer-grade devices. Current software synchronization methods usually only achieve precision at the frame level. As a result, they fall short for high-frequency shake table experiments, where even minor timing differences can cause significant triangulation errors. To address this issue, we propose a novel image-based synchronization method and a graphical user interface (GUI)-based software tool for acquiring stereo videos during shake table testing. The proposed method estimates the time lag between unsynchronized videos by minimizing reprojection errors. The estimate is then refined to sub-frame accuracy using polynomial interpolation. This method was validated against a high-precision motion capture system (Mocap) as a benchmark through large- and small-scale experiments. The proposed method reduces the RMSE of triangulation by up to 78.79% and achieves maximum displacement errors of less than 1 mm for small-scale experiments. It also reduces the RMSE of displacement measurements by 94.21% and 62.86% for small- and large-scale experiments, respectively. The results demonstrate the effectiveness of the proposed method for precise 3D displacement measurement with low-cost equipment, offering a practical alternative to the expensive vision-based measurement systems commonly used in structural dynamics research. Full article
(This article belongs to the Section Sensing and Imaging)
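The two-step lag estimation summarized above (a search over integer frame lags by reprojection error, then polynomial refinement to sub-frame accuracy) can be illustrated as follows. The reprojection_error(lag) helper is a hypothetical placeholder for triangulating the tracked targets at a given shift between the two videos and returning the mean error.

```python
# Sketch of sub-frame time-lag estimation for two unsynchronized cameras.
# reprojection_error(lag) is a hypothetical stand-in for triangulating the
# tracked targets at an integer frame shift and returning the mean error.
import numpy as np

def estimate_subframe_lag(reprojection_error, max_lag: int = 30) -> float:
    lags = np.arange(-max_lag, max_lag + 1)
    errors = np.array([reprojection_error(l) for l in lags])

    # Integer lag with minimum reprojection error.
    i = int(np.argmin(errors))
    if i == 0 or i == len(lags) - 1:
        return float(lags[i])          # minimum on the boundary; cannot refine

    # Parabolic (2nd-order polynomial) refinement around the minimum,
    # giving a sub-frame estimate of the true time offset.
    e_prev, e_min, e_next = errors[i - 1], errors[i], errors[i + 1]
    delta = 0.5 * (e_prev - e_next) / (e_prev - 2.0 * e_min + e_next)
    return float(lags[i] + delta)
```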
28 pages, 8417 KB  
Article
Democratizing IoT for Smart Irrigation: A Cost-Effective DIY Solution Proposal Evaluated in an Actinidia Orchard
by David Pascoal, Telmo Adão, Agnieszka Chojka, Nuno Silva, Sandra Rodrigues, Emanuel Peres and Raul Morais
Algorithms 2025, 18(9), 563; https://doi.org/10.3390/a18090563 - 5 Sep 2025
Abstract
Proper management of water resources in agriculture is of utmost importance for sustainable productivity, especially in the current context of climate change. However, many smart agriculture systems, including those for managing irrigation, involve tools that are costly and complex for most farmers, especially small- and medium-scale producers, despite the availability of user-friendly and community-accessible tools supported by well-established providers (e.g., Google). Hence, this paper proposes an irrigation management system integrating low-cost Internet of Things (IoT) sensors with community-accessible cloud-based data management tools. Specifically, it relies on sensors managed by an ESP32 development board to monitor several agroclimatic parameters and employs Google Sheets for data handling, visualization, and decision support, assisting operators in carrying out proper irrigation procedures. To ensure reproducibility for digital experts and, above all, non-technical professionals, a comprehensive set of guidelines is provided for the assembly and configuration of the proposed irrigation management system, aiming to promote a democratized dissemination of key technical knowledge within a do-it-yourself (DIY) paradigm. As part of this contribution, a market survey identified numerous e-commerce platforms that offer the required components at competitive prices, enabling the system to be affordably replicated. Furthermore, an irrigation management prototype was tested in a real production environment, a 2.4-hectare yellow kiwi orchard managed by an association of producers, from July to September 2021. Significant resource reductions were achieved by using low-cost IoT devices for data acquisition and the capabilities of accessible online tools such as Google Sheets. Specifically, in this study, irrigation periods were reduced by 62.50% without causing water deficits detrimental to the crops' development. Full article
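As a rough outline of the DIY pipeline this abstract describes, an ESP32 running MicroPython could push readings to a Google Sheet through a Google Apps Script web-app endpoint. The webhook URL, Wi-Fi credentials, and sensor-reading helpers below are hypothetical placeholders, not the authors' firmware.

```python
# MicroPython sketch (ESP32): post agroclimatic readings to a Google Sheets
# web-app endpoint. The URL and read_* helpers are hypothetical placeholders.
import network, time
import urequests

WEBHOOK_URL = "https://script.google.com/macros/s/XXXXX/exec"  # hypothetical Apps Script web app

def connect_wifi(ssid, password):
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(ssid, password)
    while not wlan.isconnected():
        time.sleep(1)

def read_soil_moisture():
    return 41.2   # placeholder for an ADC read of the soil-moisture probe

def read_air_temperature():
    return 27.8   # placeholder for a DHT/SHT sensor read

connect_wifi("farm-network", "secret")       # hypothetical credentials
while True:
    payload = {
        "soil_moisture": read_soil_moisture(),
        "air_temperature": read_air_temperature(),
    }
    # The Apps Script on the receiving end appends one row per POST.
    urequests.post(WEBHOOK_URL, json=payload).close()
    time.sleep(15 * 60)                      # one reading every 15 minutes
```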
18 pages, 4559 KB  
Article
Automating Leaf Area Measurement in Citrus: The Development and Validation of a Python-Based Tool
by Emilio Suarez, Manuel Blaser and Mary Sutton
Appl. Sci. 2025, 15(17), 9750; https://doi.org/10.3390/app15179750 - 5 Sep 2025
Abstract
Leaf area is a critical trait in plant physiology and agronomy, yet conventional measurement approaches such as those using ImageJ remain labor-intensive, user-dependent, and difficult to scale for high-throughput phenotyping. To address these limitations, we developed a fully automated, open-source Python tool for quantifying citrus leaf area from scanned images using multi-mask HSV segmentation, contour-hierarchy filtering, and batch calibration. The tool was validated against ImageJ across 11 citrus cultivars (n = 412 leaves), representing a broad range of leaf sizes and morphologies. Agreement between methods was near perfect, with correlation coefficients exceeding 0.997, mean bias within ±0.14 cm², and error rates below 2.5%. Bland–Altman analysis confirmed narrow limits of agreement (±0.3 cm²) while scatter plots showed robust performance across both small and large leaves. Importantly, the Python tool successfully handled challenging imaging conditions, including low-contrast leaves and edge-aligned specimens, where ImageJ required manual intervention. Processing efficiency was markedly improved, with the full dataset analyzed in 7 s compared with over 3 h using ImageJ, representing a >1600-fold speed increase. By eliminating manual thresholding and reducing user variability, this tool provides a reliable, efficient, and accessible framework for high-throughput leaf area quantification, advancing reproducibility and scalability in digital phenotyping. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Precision Agriculture)
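The core of such a pipeline (HSV masking, contour filtering, and pixel-to-area calibration) can be sketched with OpenCV as below. The HSV thresholds, the assumed 300 DPI scan resolution, and the minimum-area filter are illustrative choices, not the published tool's settings.

```python
# Sketch of scanned-leaf area measurement: HSV segmentation + contour filtering.
# Thresholds, DPI, and the minimum-area filter are illustrative assumptions.
import cv2
import numpy as np

def leaf_areas_cm2(image_path: str, dpi: float = 300.0, min_area_px: int = 500):
    pixels_per_cm = dpi / 2.54
    px_to_cm2 = 1.0 / (pixels_per_cm ** 2)

    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Broad green mask; a real tool would combine several HSV masks and use
    # the contour hierarchy to handle holes and touching leaves.
    mask = cv2.inRange(hsv, np.array([25, 40, 40]), np.array([95, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Keep only outer contours large enough to be leaves (drops specks/noise).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.contourArea(c) * px_to_cm2
            for c in contours if cv2.contourArea(c) >= min_area_px]

print(leaf_areas_cm2("scan_001.png"))   # hypothetical scanned sheet of leaves
```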
13 pages, 1288 KB  
Article
Social Trusty Algorithm: A New Algorithm for Computing the Trust Score Between All Entities in Social Networks Based on Linear Algebra
by Esra Karadeniz Köse and Ali Karcı
Appl. Sci. 2025, 15(17), 9744; https://doi.org/10.3390/app15179744 - 4 Sep 2025
Abstract
The growing importance of social networks has led to increased research into trust estimation and interpretation among network entities. Predicting the trust score between users is important in order to minimize the risks in user interactions. This article enables the identification of the most reliable and least reliable entities in a network by expressing trust scores numerically. In this paper, the social network is modeled as a graph, and trust scores are calculated by taking the powers of the ratio matrix between entities and summing them. Raising the ratio matrix to powers determined by the number of entities in the network imposes a heavy arithmetic load. Instead, the powers of the eigenvalues of the ratio matrix are taken and multiplied by the eigenvector matrix to obtain the corresponding power of the ratio matrix. In this way, the arithmetic cost of calculating trust between entities is reduced. This paper therefore calculates the trust score between entities using linear algebra techniques to reduce the arithmetic load. Existing trust detection algorithms use shortest paths and similar methods to eliminate paths that are deemed unimportant, which makes their results questionable because of the loss of data. The novelty of this method is that it calculates the trust score without the need for explicit path enumeration and without any data loss. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
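The linear-algebra shortcut described here, computing sums of powers of the ratio matrix through its eigendecomposition instead of repeated matrix multiplication, can be sketched as follows. The toy matrix and the number of summed powers are assumptions, and the identity used requires the matrix to be diagonalizable.

```python
# Sketch: sum of powers of a ratio matrix R via eigendecomposition,
# i.e. sum_{k=1..K} R^k = V * diag(sum_{k=1..K} lambda_i^k) * V^{-1},
# assuming R is diagonalizable.
import numpy as np

def trust_scores(R: np.ndarray, K: int) -> np.ndarray:
    eigvals, V = np.linalg.eig(R)
    powers = np.vstack([eigvals ** k for k in range(1, K + 1)]).sum(axis=0)
    S = V @ np.diag(powers) @ np.linalg.inv(V)
    return S.real          # trust accumulated over walks of length 1..K

# Toy 3-entity ratio matrix; the values are illustrative only.
R = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
print(trust_scores(R, K=5))
```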
17 pages, 1294 KB  
Article
SPARSE-OTFS-Net: A Sparse Robust OTFS Signal Detection Algorithm for 6G Ubiquitous Coverage
by Yunzhi Ling and Jun Xu
Electronics 2025, 14(17), 3532; https://doi.org/10.3390/electronics14173532 - 4 Sep 2025
Abstract
With the evolution of 6G technology toward global coverage and multidimensional integration, OTFS modulation has become a research focus due to its advantages in high-mobility scenarios. However, existing OTFS signal detection algorithms face challenges such as pilot contamination, Doppler spread degradation, and diverse interference in complex environments. This paper proposes the SPARSE-OTFS-Net algorithm, which establishes a comprehensive signal detection solution by innovatively integrating sparse random pilot design, compressive sensing-based frequency offset estimation with closed-loop cancellation, and joint denoising techniques combining an autoencoder, residual learning, and multi-scale feature fusion. The algorithm employs deep learning to dynamically generate non-uniform pilot distributions, reducing pilot contamination by 60%. Through orthogonal matching pursuit algorithms, it achieves super-resolution frequency offset estimation with tracking errors controlled within 20 Hz, effectively addressing Doppler spread degradation. The multi-stage denoising mechanism of deep neural networks suppresses various interferences while preserving time-frequency domain signal sparsity. Simulation results demonstrate that under large frequency offset, multipath, and low SNR conditions, multi-kernel convolution technology achieves a significant computational complexity reduction while exhibiting outstanding performance in tracking error and weak multipath detection. In 1000 km/h high-speed mobility scenarios, Doppler error estimation accuracy reaches ±25 Hz (approaching the Cramér-Rao bound), with BER performance of 5.0 × 10⁻⁶ (a 7× improvement over the single-Gaussian CNN's 3.5 × 10⁻⁵). In 1024-user interference scenarios with BER = 10⁻⁵ requirements, the SNR demand decreases from 11.4 dB to 9.2 dB (a 2.2 dB reduction), while EVM is maintained at 6.5% under 1024-user concurrency (compared to 16.5% for conventional MMSE), effectively increasing concurrent user capacity in 6G ultra-massive connectivity scenarios. These results validate the superior performance of SPARSE-OTFS-Net in 6G ultra-massive connectivity applications and provide critical technical support for realizing integrated space–air–ground networks. Full article
(This article belongs to the Section Microwave and Wireless Communications)
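The compressive-sensing frequency-offset step can be illustrated with a textbook orthogonal matching pursuit over a dictionary of candidate Doppler shifts. The signal dimensions, sample times, and frequency grid below are assumptions; this is not the SPARSE-OTFS-Net pipeline itself.

```python
# Generic orthogonal matching pursuit (OMP) over a dictionary of candidate
# Doppler shifts; dimensions and the pilot signal are illustrative assumptions.
import numpy as np

def omp(y: np.ndarray, D: np.ndarray, sparsity: int) -> np.ndarray:
    """Recover a sparse coefficient vector x with y ≈ D @ x."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1], dtype=complex)
    for _ in range(sparsity):
        # Pick the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.conj().T @ residual)))
        support.append(k)
        # Least-squares fit on the selected atoms, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# Dictionary of candidate Doppler shifts (columns = complex exponentials).
n, candidates = 256, np.linspace(-500, 500, 201)          # Hz grid, assumed
t = np.arange(n) / 1e4                                     # assumed sample times
D = np.exp(2j * np.pi * np.outer(t, candidates)) / np.sqrt(n)

true_offset = 180.0                                        # Hz, for the demo
y = np.exp(2j * np.pi * true_offset * t)
x_hat = omp(y, D, sparsity=1)
print(candidates[int(np.argmax(np.abs(x_hat)))])           # ≈ 180 Hz
```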
22 pages, 9741 KB  
Article
Augminded: Ambient Mirror Display Notifications
by Timo Götzelmann, Pascal Karg and Mareike Müller
Multimodal Technol. Interact. 2025, 9(9), 93; https://doi.org/10.3390/mti9090093 - 4 Sep 2025
Abstract
This paper presents a new approach for providing contextual information in real-world environments. Our approach is consciously designed to be low-threshold; by using mirrors as augmented reality surfaces, no devices such as AR glasses or smartphones have to be worn or held by the user. It enables technical and non-technical objects in the environment to be visually highlighted and thus to subtly draw the attention of people passing by. The presented technology provides information that users can view in more detail, if desired, by slowing down their movement, letting them decide whether it is relevant to them or not. A prototype system was implemented and evaluated through a user study. The results show a high level of acceptance and intuitive usability of the system, with participants being able to reliably perceive and process the information displayed. The technology thus offers promising potential for the unobtrusive and context-sensitive provision of information in various application areas. The paper discusses limitations of the system and outlines future research directions to further optimize the technology and extend its applicability. Full article
16 pages, 3781 KB  
Systematic Review
Augmented Reality in Dental Extractions: Narrative Review and an AR-Guided Impacted Mandibular Third-Molar Case
by Gerardo Pellegrino, Carlo Barausse, Subhi Tayeb, Elisabetta Vignudelli, Martina Casaburi, Stefano Stradiotti, Fabrizio Ferretti, Laura Cercenelli, Emanuela Marcelli and Pietro Felice
Appl. Sci. 2025, 15(17), 9723; https://doi.org/10.3390/app15179723 - 4 Sep 2025
Abstract
Background: Augmented-reality (AR) navigation is emerging as a means of turning pre-operative cone-beam CT data into intuitive, in situ guidance for difficult tooth removal, yet the scattered evidence has never been consolidated nor illustrated with a full clinical workflow. Aims: This study aims to narratively synthesise AR applications limited to dental extractions and to illustrate a full AR-guided clinical workflow. Methods: We performed a PRISMA-informed narrative search (PubMed + Cochrane, January 2015–June 2025) focused exclusively on AR applications in dental extractions and found nine eligible studies. Results: These pilot reports—covering impacted third molars, supernumerary incisors, canines, and cyst-associated teeth—all used marker-less registration on natural dental surfaces and achieved mean target-registration errors below 1 mm with headset set-up times under three minutes; the only translational series (six molars) recorded a mean surgical duration of 21 ± 6 min and a System Usability Scale score of 79. To translate these findings into practice, we describe a case of AR-guided mandibular third-molar extraction. A QR-referenced 3D-printed splint, intra-oral scan, and CBCT were fused to create a colour-coded hologram rendered on a Magic Leap 2 headset. The procedure took 19 min and required only a conservative osteotomy and accurate odontotomy that ended without neurosensory disturbance (VAS pain 2/10 at one week). Conclusions: Collectively, the literature synthesis and clinical demonstration suggest that current AR platforms deliver sub-millimetre accuracy, minimal workflow overhead, and high user acceptance in high-risk extractions while highlighting the need for larger, controlled trials to prove tangible patient benefit. Full article
13 pages, 2058 KB  
Article
Development of a Spatial Alignment System for Interacting with BIM Objects in Mixed Reality
by Jaehong Cho, Sungpyo Kim and Sanghyeok Kang
Appl. Sci. 2025, 15(17), 9713; https://doi.org/10.3390/app15179713 - 4 Sep 2025
Abstract
This study proposes a Two-points Spatial Alignment System (TSAS) for accurate positioning of Building Information Modeling (BIM) objects in Mixed Reality (MR) environments at construction sites. Conventional spatial alignment methods present limitations: marker-based approaches require precise marker installation and setup in predefined locations, while drag-based methods rely considerably on user manipulation skills. TSAS utilizes Y-axis rotation and vector-based scaling mechanisms to facilitate the alignment process. In a usability evaluation with 30 participants in MR environments, TSAS achieved an alignment error of 50.3 mm, compared to 64.0 mm for the marker-based method and 199.7 mm for the drag method. A one-way Analysis of Variance (ANOVA) confirmed that these differences in accuracy were statistically significant (p < 0.001). Notably, TSAS meets the Korean building regulation's tolerance while maintaining consistent accuracy in indoor environments. Although the marker method showed better efficiency in operation time, this evaluation excluded initial installation time requirements. The usability evaluation suggests this approach could be beneficial for BIM visualization and review processes in construction settings. Future research will focus on validating the system's performance in diverse construction environments, including larger buildings and complex sites. Full article
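The two-point alignment idea (a uniform scale from the distance between the two reference points, a rotation about the vertical Y axis from their horizontal direction, and a translation fixing one point) can be outlined as follows. The Y-up convention and the use of plain NumPy vectors are assumptions, not the TSAS implementation.

```python
# Sketch of a two-point spatial alignment: scale from the point-pair distance,
# yaw (Y-axis) rotation from the horizontal direction, then translation.
# Conventions (Y-up, 3D NumPy vectors) are assumptions, not the TSAS code.
import numpy as np

def two_point_alignment(model_a, model_b, world_a, world_b):
    model_a, model_b = np.asarray(model_a, float), np.asarray(model_b, float)
    world_a, world_b = np.asarray(world_a, float), np.asarray(world_b, float)

    # Uniform scale from the ratio of the two point-pair distances.
    s = np.linalg.norm(world_b - world_a) / np.linalg.norm(model_b - model_a)

    # Yaw angle between the horizontal (XZ-plane) directions of the two pairs.
    def yaw(v):
        return np.arctan2(v[0], v[2])
    theta = yaw(world_b - world_a) - yaw(model_b - model_a)
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[  c, 0.0,  si],
                  [0.0, 1.0, 0.0],
                  [-si, 0.0,   c]])        # rotation about the Y axis

    t = world_a - s * R @ model_a          # translation fixing the first point
    return s, R, t                         # world ≈ s * R @ model + t
```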
24 pages, 1766 KB  
Article
Evaluating Interaction Capability in a Serious Game for Children with ASD: An Operability-Based Approach Aligned with ISO/IEC 25010:2023
by Delia Isabel Carrión-León, Milton Paúl Lopez-Ramos, Luis Gonzalo Santillan-Valdiviezo, Damaris Sayonara Tanguila-Tapuy, Gina Marilyn Morocho-Santos, Raquel Johanna Moyano-Arias, María Elena Yautibug-Apugllón and Ana Eva Chacón-Luna
Computers 2025, 14(9), 370; https://doi.org/10.3390/computers14090370 - 4 Sep 2025
Abstract
Serious games for children with Autism Spectrum Disorder (ASD) require rigorous evaluation frameworks that capture neurodivergent interaction patterns. This pilot study designed, developed, and evaluated a serious game for children with ASD, focusing on operability assessment aligned with the ISO/IEC 25010:2023 standard. A repeated-measures design involved ten children with ASD from the Carlos Garbay Special Education Institute in Riobamba, Ecuador, across 25 gameplay sessions. A bespoke operability algorithm incorporating four weighted components (ease of learning, user control, interface familiarity, and message comprehension) was developed through expert consultation with certified ASD therapists. Statistical analysis used linear mixed-effects models with the Kenward–Roger correction, supplemented by thorough validation including split-half reliability and partial correlations. The operability metric demonstrated excellent internal consistency (split-half reliability = 0.94, 95% CI [0.88, 0.97]) and construct validity through partial correlations controlling for performance (difficulty: r_partial = 0.42, p = 0.037). Eighty percent of sessions achieved moderate-to-high operability levels (M = 45.07, SD = 10.52). Contrary to expectations, operability consistently improved with increasing difficulty level (Easy: M = 37.04; Medium: M = 48.71; Hard: M = 53.87), indicating that individuals with enhanced capabilities advanced to harder levels. Mixed-effects modeling indicated substantial difficulty effects (H = 9.36, p = 0.009, ε² = 0.39). This pilot study establishes preliminary evidence for operability assessment in ASD serious games; larger confirmatory validation studies (n ≥ 30) are required to establish broader generalizability and standardized instrument integration. The positive difficulty–operability association highlights the importance of adaptive game design in supporting skill progression. Full article
(This article belongs to the Section Human–Computer Interactions)
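The four-component operability metric can be illustrated with a simple weighted aggregate, as sketched below. The weights and the 0-100 output scale are placeholders; the study's actual weights were set through expert consultation and are not reproduced here.

```python
# Sketch of a weighted operability score from four session-level components.
# The weights and the 0-100 output scale are placeholders; the study's weights
# were set through expert consultation and are not reproduced here.
WEIGHTS = {
    "ease_of_learning": 0.30,
    "user_control": 0.30,
    "interface_familiarity": 0.20,
    "message_comprehension": 0.20,
}

def operability(components: dict) -> float:
    """components: each value normalized to [0, 1] for one gameplay session."""
    score = sum(WEIGHTS[name] * components[name] for name in WEIGHTS)
    return 100.0 * score

session = {"ease_of_learning": 0.6, "user_control": 0.5,
           "interface_familiarity": 0.7, "message_comprehension": 0.4}
print(operability(session))    # 55.0 on the assumed 0-100 scale
```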
24 pages, 1270 KB  
Article
Data-Driven Requirements Elicitation from App Reviews Framework Based on BERT
by Fatma A. Mihany, Galal H. Galal-Edeen, Ehab E. Hassanein and Hanan Moussa
Appl. Sci. 2025, 15(17), 9709; https://doi.org/10.3390/app15179709 - 4 Sep 2025
Abstract
Market-Driven Requirements Engineering (MDRE) integrates traditional Requirements Engineering (RE) practices, such as requirements elicitation and requirements prioritization, with market analysis. Offering software products to an open market has become a trend, yet it brings many challenges. In MDRE, there are diverse sources of requirements, including support teams, subcontractors, sales, and marketing teams, so the MDRE process must provide ongoing requirements-gathering techniques to ensure no crucial requirements are overlooked. Users can search for and download software applications through app stores (such as the Google Play Store and Apple App Store) for various purposes, and they can express their opinions about these applications by writing short text messages widely known as "app reviews". Utilizing these app reviews as a source of requirements while planning to develop a similar software application may yield promising results. Therefore, the concept of "app review utilization" has emerged and can be applied for various purposes. This research utilizes app reviews for requirements elicitation when developing a software product in the market-driven development context. However, these reviews may be noisy and informally expressed. This paper therefore proposes a framework, Automatic Requirements Elicitation from App Reviews (AREAR), that integrates Natural Language Processing (NLP) techniques with pre-trained language models to automatically elicit requirements from available app reviews while developing a market-driven software product. AREAR employs the Bidirectional Encoder Representations from Transformers (BERT) language model. The proposed framework achieved improved accuracy and F1 score compared to previous research. Full article
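The classification step at the heart of such a framework (labelling each app review as a candidate requirement or not) can be outlined with the Hugging Face transformers library. The label set and the assumption that the classifier has already been fine-tuned on labelled reviews are hypothetical; this sketch is not the AREAR pipeline.

```python
# Sketch: classify app reviews with a BERT sequence classifier.
# "bert-base-uncased" is the public base checkpoint; the three labels and the
# idea that it has been fine-tuned on labelled reviews are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["feature_request", "bug_report", "other"]       # assumed label set
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))          # fine-tune before real use

reviews = [
    "Please add an option to export my data as CSV.",
    "The app crashes every time I open the settings page.",
]
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
for review, idx in zip(reviews, logits.argmax(dim=-1).tolist()):
    print(LABELS[idx], "<-", review)
```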
30 pages, 6821 KB  
Article
Prediction of Maximum Scour Around Circular Bridge Piers Using Semi-Empirical and Machine Learning Models
by Buddhadev Nandi and Subhasish Das
Water 2025, 17(17), 2610; https://doi.org/10.3390/w17172610 - 3 Sep 2025
Abstract
Local scour around bridge piers is one of the primary causes of structural failure in bridges. Therefore, this study focuses on the estimation of maximum scour depth (dsm), which is essential for safe and resilient bridge design. Over the last eight decades, many studies have collected metadata and developed around 80 empirical formulas using various scour-affecting parameters over different ranges. To date, a total of 33 formulas have been comparatively analyzed and ranked based on their predictive accuracy. In this study, novel formulas using semi-empirical methods and gene expression programming (GEP) have been developed alongside an artificial neural network (ANN) model to accurately estimate dsm using 768 observed data points collected from published work, along with eight newly conducted experimental data points obtained in the laboratory. These new formulas/models are systematically compared with 74 empirical literature formulas for their predictive capability. The influential parameters for predicting dsm are flow intensity, flow shallowness, sediment gradation, sediment coarseness, time, constriction ratio, and Froude number. The performance of the formulas is compared using different statistical metrics such as the coefficient of determination (CD), Nash–Sutcliffe efficiency (NSE), mean bias error, and root-mean-squared error (RMSE). The Gauss–Newton method is employed to solve the nonlinear least-squares problem and develop a semi-empirical formula that outperforms the literature formulas, except the formula from GEP, in terms of statistical performance metrics. However, the feed-forward ANN model outperformed the semi-empirical model in the testing and validation phases, with a higher CD (0.790 vs. 0.756) and NSE (0.783 vs. 0.750), a lower RMSE (0.289 vs. 0.301), and greater prediction accuracy (64.655% vs. 61.935%), providing approximately 15–18% greater accuracy with minimal errors and narrower uncertainty bands. User-friendly tools and a strong semi-empirical model that require no coding skills can assist designers and engineers in making accurate predictions in practical bridge design and safety planning. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
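The semi-empirical fitting step can be illustrated with SciPy's Levenberg-Marquardt solver, a damped Gauss-Newton iteration. The power-law functional form, the parameter names, and the synthetic data below are assumptions for illustration, not the paper's formula or dataset.

```python
# Sketch: fit a power-law semi-empirical scour formula with least squares.
# The functional form d_sm/b = a0 * Fr^a1 * (h/b)^a2 * sigma_g^a3 and the
# synthetic data are illustrative assumptions, not the paper's formula/dataset.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n = 200
Fr = rng.uniform(0.2, 0.8, n)            # Froude number
h_b = rng.uniform(0.5, 4.0, n)           # flow shallowness (depth / pier width)
sigma_g = rng.uniform(1.1, 3.0, n)       # sediment gradation
d_obs = 2.0 * Fr**0.6 * h_b**0.3 * sigma_g**-0.4 * rng.lognormal(0.0, 0.1, n)

def residuals(p):
    a0, a1, a2, a3 = p
    d_pred = a0 * Fr**a1 * h_b**a2 * sigma_g**a3
    return d_pred - d_obs

# method="lm" is a damped Gauss-Newton (Levenberg-Marquardt) iteration.
fit = least_squares(residuals, x0=[1.0, 0.5, 0.5, -0.5], method="lm")
print(fit.x)                              # fitted coefficients a0..a3
```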
24 pages, 2585 KB  
Article
Comprehensive Examination of Unrolled Networks for Solving Linear Inverse Problems
by Yuxi Chen, Xi Chen, Arian Maleki and Shirin Jalali
Entropy 2025, 27(9), 929; https://doi.org/10.3390/e27090929 - 3 Sep 2025
Abstract
Unrolled networks have become prevalent in various computer vision and imaging tasks. Although they have demonstrated remarkable efficacy in solving specific computer vision and computational imaging tasks, their adaptation to other applications presents considerable challenges. This is primarily due to the multitude of design decisions that practitioners working on new applications must navigate, each potentially affecting the network’s overall performance. These decisions include selecting the optimization algorithm, defining the loss function, and determining the deep architecture, among others. Compounding the issue, evaluating each design choice requires time-consuming simulations to train, fine-tune the neural network, and optimize its performance. As a result, the process of exploring multiple options and identifying the optimal configuration becomes time-consuming and computationally demanding. The main objectives of this paper are (1) to unify some ideas and methodologies used in unrolled networks to reduce the number of design choices a user has to make, and (2) to report a comprehensive ablation study to discuss the impact of each of the choices involved in designing unrolled networks and present practical recommendations based on our findings. We anticipate that this study will help scientists and engineers to design unrolled networks for their applications and diagnose problems within their networks efficiently. Full article
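A minimal example of the unrolled-network idea studied here is LISTA-style unrolling of ISTA for a linear inverse problem y = Ax + noise, where each "layer" is one proximal-gradient step with a learnable step size and threshold. This is a generic sketch, not one of the architectures benchmarked in the paper.

```python
# Minimal LISTA-style unrolled network: each layer is one ISTA step
#   x <- soft_threshold(x - eta * A^T (A x - y), theta)
# with a learnable step size eta and threshold theta per layer.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A: torch.Tensor, n_layers: int = 10):
        super().__init__()
        self.register_buffer("A", A)
        self.eta = nn.Parameter(torch.full((n_layers,), 0.1))
        self.theta = nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        A = self.A
        x = torch.zeros(y.shape[0], A.shape[1], device=y.device)
        for eta, theta in zip(self.eta, self.theta):
            grad = (x @ A.T - y) @ A              # A^T (A x - y), batched
            z = x - eta * grad
            x = torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)  # soft threshold
        return x

m, n = 64, 128
A = torch.randn(m, n) / m**0.5                    # assumed measurement operator
net = UnrolledISTA(A)
y = torch.randn(4, m)                             # a batch of 4 measurement vectors
x_hat = net(y)                                    # reconstructed sparse codes
print(x_hat.shape)                                # torch.Size([4, 128])
```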
19 pages, 2801 KB  
Article
Validation of a User Sketch-Based Spatial Planning Review Method in a Building Information Modeling and Virtual Reality Integrated Environment
by ByungChan Kong and WoonSeong Jeong
Buildings 2025, 15(17), 3170; https://doi.org/10.3390/buildings15173170 - 3 Sep 2025
Abstract
This study introduces a novel space feasibility assessment process and evaluates its effectiveness through a comparative analysis with a conventional manual process. The proposed method is designed to enhance spatial comprehension and integrate building performance analysis, thereby supporting budgetary considerations during the early design phase. By providing a more intuitive and interactive environment, the system enables stakeholders—such as building owners—to communicate their spatial requirements to architects and professionals more clearly and efficiently. To validate the effectiveness of the proposed approach, participants completed two distinct scenarios: (1) a manual space feasibility assessment, and (2) a system-supported space feasibility assessment utilizing the proposed method. Participant performance was measured in terms of speed and accuracy in each scenario. Additionally, a user satisfaction survey was conducted to evaluate the usability of the system’s functionality. The experimental results provide an empirical basis for comparing the proposed process with the manual approach. Findings demonstrate that the proposed process enables more efficient and accurate space feasibility assessments, thereby validating its effectiveness as a user-centered decision-support tool during early-stage architectural planning. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)