Search Results (140)

Search Parameters:
Keywords = BFG

18 pages, 14342 KB  
Article
A Multi-LiDAR Self-Calibration System Based on Natural Environments and Motion Constraints
by Yuxuan Tang, Jie Hu, Zhiyong Yang, Wencai Xu, Shuaidi He and Bolun Hu
Mathematics 2025, 13(19), 3181; https://doi.org/10.3390/math13193181 - 4 Oct 2025
Viewed by 165
Abstract
Autonomous commercial vehicles often mount multiple LiDARs to enlarge their field of view, but conventional calibration is labor-intensive and prone to drift during long-term operation. We present an online self-calibration method that combines a ground plane motion constraint with a virtual RGB–D projection, mapping 3D point clouds to 2D feature/depth images to reduce feature extraction cost while preserving 3D structure. Motion consistency across consecutive frames enables a reduced-dimension hand–eye formulation. Within this formulation, the estimation integrates geometric constraints on SE(3) using Lagrange multiplier aggregation and quasi-Newton refinement. This approach highlights key aspects of identifiability, conditioning, and convergence. An online monitor evaluates plane alignment and LiDAR–INS odometry consistency to detect degradation and trigger recalibration. Tests on a commercial vehicle with six LiDARs and on nuScenes demonstrate accuracy comparable to offline, target-based methods while supporting practical online use. On the vehicle, maximum errors are 6.058 cm (translation) and 4.768° (rotation); on nuScenes, 2.916 cm and 5.386°. The approach streamlines calibration, enables online monitoring, and remains robust in real-world settings.
(This article belongs to the Section A: Algebra and Logic)

19 pages, 408 KB  
Article
Exploring Symmetry Structures in Integrity-Based Vulnerability Analysis Using Bipolar Fuzzy Graph Theory
by Muflih Alhazmi, Gangatharan Venkat Narayanan, Perumal Chellamani and Shreefa O. Hilali
Symmetry 2025, 17(9), 1552; https://doi.org/10.3390/sym17091552 - 16 Sep 2025
Viewed by 266
Abstract
The integrity parameter in vulnerability refers to a set of removed vertices and the maximum number of connected components that remain functional. A bipolar fuzzy graph (BFG) assigns membership values to both positive and negative attributes. A new parameter, integrity, is defined and discussed using an example of a BFG. The integrity value of a special type of graph is determined, and the node strength sequence (NSS) for BFG is introduced. Specific NSS values are used to discuss the integrity values of paths and cycles. The integrity of the union, join, and Cartesian product of two BFGs is presented. This parameter is then applied to a road network with both positive and negative attributes, and the findings are discussed with a conclusion.
(This article belongs to the Section Mathematics)

25 pages, 2907 KB  
Article
Benchmarking ML Algorithms Against Traditional Correlations for Dynamic Monitoring of Bottomhole Pressure in Nitrogen-Lifted Wells
by Samuel Nashed and Rouzbeh Moghanloo
Processes 2025, 13(9), 2820; https://doi.org/10.3390/pr13092820 - 3 Sep 2025
Viewed by 482
Abstract
Proper estimation of flowing bottomhole pressure at coiled tubing depth (BHP-CTD) is crucial for optimizing nitrogen lifting operations in oil wells. Conventional estimation techniques such as empirical correlations and mechanistic models may be characterized by poor generalizability, low accuracy, and inapplicability in real time. This study overcomes these shortcomings by developing and comparing sixteen machine learning (ML) regression models, such as neural networks and genetic programming-based symbolic regression, to predict BHP-CTD from field data collected on 518 oil wells. Operational parameters used to train the models included fluid flow rate, gas–oil ratio, coiled tubing depth, and nitrogen rate. The best performance was obtained with a neural network using the L-BFGS optimizer (R2 = 0.987) and low error metrics (RMSE = 0.014, MAE = 0.011). An interpretable equation with R2 = 0.94 was also obtained through a symbolic regression model. The robustness of the model was confirmed by both k-fold and random sampling validation, and generalizability was confirmed using blind validation on data from 29 wells not included in the training set. The ML models proved more accurate, adaptable, and applicable in real time than empirical correlations such as Hagedorn and Brown, Beggs and Brill, and Orkiszewski. This study not only provides a cost-efficient alternative to downhole pressure gauges but also adds an interpretable, data-driven framework to increase the efficiency of nitrogen lifting in various operational conditions.
(This article belongs to the Section AI-Enabled Process Engineering)
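As a rough illustration of the kind of model the study reports best results for, a neural network fit with an L-BFGS optimizer can be sketched with scikit-learn's MLPRegressor, whose `solver="lbfgs"` option uses a limited-memory quasi-Newton optimizer. The four inputs and the synthetic target below are illustrative stand-ins for the paper's field data (fluid flow rate, gas–oil ratio, coiled tubing depth, nitrogen rate), not the actual dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Illustrative stand-ins for the paper's inputs (rate, GOR, CT depth, N2 rate)
X = rng.uniform([100, 200, 1000, 10], [2000, 2000, 4000, 60], size=(n, 4))
# Synthetic BHP-like target: smooth nonlinear function of the inputs plus noise
y = 0.002 * X[:, 2] + 0.3 * np.log(X[:, 0]) - 0.01 * X[:, 3] + rng.normal(0, 0.05, n)

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                     max_iter=2000, random_state=0).fit(Xs, y)
print(round(model.score(Xs, y), 3))
```

On small, smooth tabular problems like this, L-BFGS typically converges in far fewer iterations than SGD-style solvers, which is one reason it is a common choice for compact regression networks.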

33 pages, 4628 KB  
Article
A Robust Aerodynamic Design Optimization Methodology for UAV Airfoils Based on Stochastic Surrogate Model and PPO-Clip Algorithm
by Yiyu Wang, Yuxin Huo, Zhilong Zhong, Renxing Ji, Yang Chen, Bo Wang and Xiaoping Ma
Drones 2025, 9(9), 607; https://doi.org/10.3390/drones9090607 - 28 Aug 2025
Viewed by 635
Abstract
Unmanned Aerial Vehicles (UAVs) are now widely used in meteorology and logistics due to their unique advantages. During their lifecycle, uncertainties—such as flight condition variations—can significantly affect both design and performance, making Robust Aerodynamic Design Optimization (RADO) essential. However, existing RADO methodologies face the high computational cost of uncertainty analysis and the inefficiency of conventional optimization algorithms. To address these challenges, this paper proposes a novel RADO methodology integrating a Stochastic Kriging (SK) surrogate model with the PPO-Clip reinforcement learning algorithm, targeting atmospheric uncertainties encountered by turbojet-powered UAVs in transonic cruise. The SK surrogate model, constructed via Maximin Latin Hypercube Sampling and refined using the Expected Improvement infill criterion, enabled efficient uncertainty quantification. Based on the trained surrogate model, a PPO-Clip-based RADO framework with tailored reward and state transition functions was established. Applied to the RAE2822 airfoil under Mach number perturbations, the methodology demonstrated superior reliability and efficiency compared with the L-BFGS-B and PSO algorithms.

11 pages, 1896 KB  
Article
Real-Time Cell Gap Estimation in LC-Filled Devices Using Lightweight Neural Networks for Edge Deployment
by Chi-Yen Huang, You-Lun Zhang, Su-Yu Liao, Wen-Chun Huang, Jiann-Heng Chen, Bo-Chang Dong, Che-Ju Hsu and Chun-Ying Huang
Nanomaterials 2025, 15(16), 1289; https://doi.org/10.3390/nano15161289 - 21 Aug 2025
Viewed by 692
Abstract
Accurate determination of the liquid crystal (LC) cell gap after filling is essential for ensuring device performance in LC-based optical applications. However, the introduction of birefringent materials significantly distorts the transmission spectrum, complicating traditional optical analysis. In this work, we propose a lightweight machine learning framework using a shallow multilayer perceptron (MLP) to estimate the cell gap directly from the transmission spectrum of filled LC cells. The model was trained on experimentally acquired spectra with peak-to-peak interferometry-derived ground truth values. We systematically evaluated different optimization algorithms, activation functions, and hidden neuron configurations to identify an optimal model setting that balances prediction accuracy and computational simplicity. The best-performing model, using exponential activation with eight hidden units and BFGS optimization, achieved a correlation coefficient near 1 and an RMSE below 0.1 μm across multiple random seeds and training–test splits. The model was successfully deployed on a Raspberry Pi 4, demonstrating real-time inference with low latency, memory usage, and power consumption. These results validate the feasibility of portable, edge-based LC inspection systems for in situ diagnostics and quality control.
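A minimal sketch of the core idea, a small network with an exponential-style activation trained by full BFGS, using `scipy.optimize.minimize`. The input width, the bounded `exp(-|x|)` activation, and the synthetic spectrum-like data are assumptions for illustration, not the authors' exact architecture; only the eight hidden units match the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_in, n_hid = 10, 8          # hypothetical 10 spectral features; 8 hidden units
X = rng.normal(size=(200, n_in))
true_w = rng.normal(size=n_in)
y = np.tanh(X @ true_w) + rng.normal(0, 0.01, 200)   # synthetic cell-gap-like target

def unpack(p):
    W1 = p[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = p[n_in * n_hid:]
    return W1, w2

def loss(p):
    W1, w2 = unpack(p)
    h = np.exp(-np.abs(X @ W1))    # bounded exponential-style activation (assumption)
    return np.mean((h @ w2 - y) ** 2)

p0 = 0.1 * rng.normal(size=n_in * n_hid + n_hid)
res = minimize(loss, p0, method="BFGS")   # gradient approximated by finite differences
print(res.fun < loss(p0))
```

For a network this small, full BFGS is affordable even with finite-difference gradients; larger models would normally switch to L-BFGS with analytic gradients.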

22 pages, 3665 KB  
Article
Comparative Study of Linear and Non-Linear ML Algorithms for Cement Mortar Strength Estimation
by Sebghatullah Jueyendah, Zeynep Yaman, Turgay Dere and Türker Fedai Çavuş
Buildings 2025, 15(16), 2932; https://doi.org/10.3390/buildings15162932 - 19 Aug 2025
Cited by 2 | Viewed by 509
Abstract
The compressive strength (Fc) of cement mortar (CM) is a key parameter in ensuring the mechanical reliability and durability of cement-based materials. Traditional testing methods are labor-intensive, time-consuming, and often lack predictive flexibility. With the increasing adoption of machine learning (ML) in civil engineering, data-driven approaches offer a rapid, cost-effective alternative for forecasting material properties. This study investigates a wide range of supervised linear and nonlinear ML regression models to predict the Fc of CM. The evaluated models include linear regression, ridge regression, lasso regression, decision trees, random forests, gradient boosting, k-nearest neighbors (KNN), and twelve neural network (NN) architectures, developed by combining different optimizers (L-BFGS, Adam, and SGD) with activation functions (tanh, relu, logistic, and identity). Model performance was assessed using the root mean squared error (RMSE), coefficient of determination (R2), and mean absolute error (MAE). Among all models, NN_tanh_lbfgs achieved the best results, with an almost perfect fit in training (R2 = 0.9999, RMSE = 0.0083, MAE = 0.0063) and excellent generalization in testing (R2 = 0.9946, RMSE = 1.5032, MAE = 1.2545). NN_logistic_lbfgs, gradient boosting, and NN_relu_lbfgs also exhibited high predictive accuracy and robustness. The SHAP analysis revealed that curing age and nano silica/cement ratio (NS/C) positively influence Fc, while porosity has the strongest negative impact. The main novelty of this study lies in the systematic tuning of neural networks via distinct optimizer–activation combinations, and the integration of SHAP for interpretability—bridging the gap between predictive performance and explainability in cementitious materials research. These results confirm the NN_tanh_lbfgs as a highly reliable model for estimating Fc in CM, offering a robust, interpretable, and scalable solution for data-driven strength prediction.
(This article belongs to the Special Issue Advanced Research on Concrete Materials in Construction)

15 pages, 656 KB  
Article
Green Technology Game and Data-Driven Parameter Identification in the Digital Economy
by Xiaofeng Li and Qun Zhao
Mathematics 2025, 13(14), 2302; https://doi.org/10.3390/math13142302 - 18 Jul 2025
Viewed by 326
Abstract
The digital economy presents multiple challenges to the promotion of green technologies, including behavioral uncertainty among firms, heterogeneous technological choices, and disparities in policy incentive strength. This study develops a tripartite evolutionary game model encompassing government, production enterprises, and technology suppliers to systematically explore the strategic evolution mechanisms underlying green technology adoption. A three-dimensional nonlinear dynamic system is constructed using replicator dynamics, and the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is applied to identify key cost and benefit parameters for firms. Simulation results exhibit a strong match between the estimated parameters and simulated data, highlighting the model's identifiability and explanatory capacity. In addition, the stability of eight pure strategy equilibrium points is examined through Jacobian analysis, revealing the evolutionary trajectories and local stability features across various strategic configurations. These findings offer theoretical guidance for optimizing green policy design and identifying behavioral pathways, while establishing a foundation for data-driven modeling of dynamic evolutionary processes.
(This article belongs to the Special Issue Dynamic Analysis and Decision-Making in Complex Networks)
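BFGS-based parameter identification of the kind described above can be illustrated on a toy one-population replicator dynamic; the benefit/cost parameters and trajectory below are hypothetical, not the paper's tripartite model. The sketch also surfaces an identifiability caveat of the sort such fitting must handle: here only the difference b - c is recoverable from the trajectory.

```python
import numpy as np
from scipy.optimize import minimize

# Toy replicator dynamic: x' = x (1 - x) (b - c), with benefit b and cost c unknown.
def simulate(b, c, x0=0.2, dt=0.1, steps=80):
    x = np.empty(steps)
    x[0] = x0
    for t in range(1, steps):
        x[t] = x[t - 1] + dt * x[t - 1] * (1 - x[t - 1]) * (b - c)
    return x

true_b, true_c = 1.5, 0.6
rng = np.random.default_rng(2)
observed = simulate(true_b, true_c) + rng.normal(0, 0.005, 80)

def sse(params):
    b, c = params
    return np.sum((simulate(b, c) - observed) ** 2)

res = minimize(sse, x0=[1.0, 0.2], method="BFGS")
print(res.x)   # only the difference b - c is identifiable from this trajectory
```

In the full tripartite system, richer trajectories (three strategy shares, varying initial conditions) are what allow individual cost and benefit parameters to be separated rather than only their differences.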

30 pages, 3453 KB  
Article
Addressing Weather Data Gaps in Reference Crop Evapotranspiration Estimation: A Case Study in Guinea-Bissau, West Africa
by Gabriel Garbanzo, Jesus Céspedes, Marina Temudo, Tiago B. Ramos, Maria do Rosário Cameira, Luis Santos Pereira and Paula Paredes
Hydrology 2025, 12(7), 161; https://doi.org/10.3390/hydrology12070161 - 22 Jun 2025
Viewed by 1010
Abstract
Crop water use (ETc) is typically estimated as the product of crop evapotranspiration (ETo) and a crop coefficient (Kc). However, the estimation of ETo requires various meteorological data, which are often unavailable or of poor quality, particularly in countries such as Guinea-Bissau, where the maintenance of weather stations is frequently inadequate. The present study aimed to assess alternative approaches, as outlined in the revised FAO56 guidelines, for estimating ETo when only temperature data is available. These included the use of various predictors for the missing climatic variables, referred to as the Penman–Monteith temperature (PMT) approach. New approaches were developed, with a particular focus on optimizing the predictors at the cluster level. Furthermore, different gridded weather datasets (AgERA5 and MERRA-2 reanalysis) were evaluated for ETo estimation to overcome the lack of ground-truth data and upscale ETo estimates from point to regional and national levels, thereby supporting water management decision-making. The results demonstrate that the PMT is generally accurate, with RMSE not exceeding 26% of the average daily ETo. With regard to shortwave radiation, using the temperature difference as a predictor in combination with cluster-focused multiple linear regression equations for estimating the radiation adjustment coefficient (kRs) yielded accurate results. ETo estimates derived using raw (uncorrected) reanalysis data exhibit considerable bias and high RMSE (1.07–1.57 mm d−1), indicating the need for bias correction. Various correction methods were tested, with the simple bias correction delivering the best overall performance, reducing RMSE to 0.99 mm d−1 and 1.05 mm d−1 for AgERA5 and MERRA-2, respectively, and achieving a normalized RMSE of about 22%. After implementing bias correction, the AgERA5 was found to be superior to the MERRA-2 for all the studied sites. Furthermore, the PMT outperformed the bias-corrected reanalysis in estimating ETo. It was concluded that PMT-ETo can be recommended for further application in countries with limited access to ground-truth meteorological data, as it requires only basic technical skills. It can also be used alongside reanalysis data, which demands more advanced expertise, particularly for data retrieval and processing.

23 pages, 4919 KB  
Article
Hybrid Symbolic Regression and Machine Learning Approaches for Modeling Gas Lift Well Performance
by Samuel Nashed and Rouzbeh Moghanloo
Fluids 2025, 10(7), 161; https://doi.org/10.3390/fluids10070161 - 21 Jun 2025
Cited by 1 | Viewed by 1181
Abstract
Proper determination of the bottomhole pressure in a gas lift well is essential to enhance production, tackle operating concerns, and use the least amount of gas. Mechanistic models, empirical correlations, and hybrid models are usually limited by calibration requirements, large input demands, or a limited scope of application. In this study, sixteen well-tested machine learning (ML) models, such as genetic programming-based symbolic regression and neural networks, are developed and studied to accurately predict flowing BHP at the perforation depth, using a dataset from 304 gas lift wells. The dataset covers a variety of parameters related to reservoirs, completions, and operations. After careful preprocessing and analysis of features, the models were prepared and tested with cross-validation, random sampling, and blind testing. Among all approaches, using the L-BFGS optimizer on the neural network gave the best predictions, with an R2 of 0.97, low errors, and better accuracy than other ML methods. SHAP analysis found that the injection point depth, tubing depth, and fluid flow rate are the main determining factors. Applying the model to 30 unseen additional wells confirmed its reliability and real-world utility. This study shows that ML prediction of BHP is an effective alternative to traditional models and pressure gauges, as it is simpler, quicker, more accurate, and more economical.
(This article belongs to the Special Issue Advances in Multiphase Flow Simulation with Machine Learning)

24 pages, 1258 KB  
Article
Enhancing Ability Estimation with Time-Sensitive IRT Models in Computerized Adaptive Testing
by Ahmet Hakan İnce and Serkan Özbay
Appl. Sci. 2025, 15(13), 6999; https://doi.org/10.3390/app15136999 - 21 Jun 2025
Viewed by 1133
Abstract
This study investigates the impact of response time on ability estimation within an Item Response Theory (IRT) framework, introducing time-sensitive formulations to enhance student assessment accuracy. Seven models were evaluated, including standard 1PL-IRT and six response-time-adjusted variants: TP-IRT, STP-IRT, TWD-IRT, NRT-IRT, DTA-IRT, and ART-IRT. Three optimization techniques—Maximum Likelihood Estimation (MLE), full parameter optimization, and K-fold Cross-Validation (CV)—were employed to assess model performance. Empirical validation was conducted using data from 150 students solving 30 mathematics items on the "TestYourself" platform, integrating response accuracy and timing metrics. Student abilities (θ), item difficulties (b), and time–effect parameters (λ) were estimated using the L-BFGS-B algorithm to ensure numerical stability. The results indicate that subtractive models, particularly DTA-IRT, achieved the lowest AIC/BIC values, highest AUC, and improved parameter stability, confirming their effectiveness in penalizing excessive response times without disproportionately affecting moderate-speed students. In contrast, multiplicative models (TWD-IRT, ART-IRT) exhibited higher variability, weaker generalizability, and increased instability, raising concerns about their applicability in adaptive testing. K-fold CV further validated the robustness of subtractive models, emphasizing their suitability for real-world assessments. These findings highlight the importance of incorporating response time as an additive factor to improve ability estimation while maintaining fairness and interpretability. Future research should explore multidimensional IRT extensions, behavioral response–time analysis, and adaptive testing environments that dynamically adjust item difficulty based on response behavior.
(This article belongs to the Special Issue Applications of Smart Learning in Education)
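The estimation machinery described above can be sketched in miniature: joint maximum-likelihood estimation of abilities θ and difficulties b in a standard 1PL model via L-BFGS-B, with box bounds providing the numerical stability the abstract mentions. The simulated cohort below is an assumption, not the 150-student "TestYourself" data, and the time-effect parameters λ are omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n_students, n_items = 50, 20
theta_true = rng.normal(size=n_students)
b_true = rng.normal(size=n_items)
prob = expit(theta_true[:, None] - b_true[None, :])   # 1PL response probability
resp = (rng.uniform(size=prob.shape) < prob).astype(float)

def nll(params):
    # Negative log-likelihood of all responses given abilities and difficulties
    theta, b = params[:n_students], params[n_students:]
    p = expit(theta[:, None] - b[None, :])
    eps = 1e-9
    return -np.sum(resp * np.log(p + eps) + (1 - resp) * np.log(1 - p + eps))

x0 = np.zeros(n_students + n_items)
bounds = [(-4, 4)] * (n_students + n_items)   # box constraints keep estimates stable
res = minimize(nll, x0, method="L-BFGS-B", bounds=bounds)
theta_hat = res.x[:n_students]
print(np.corrcoef(theta_hat, theta_true)[0, 1])
```

With only 20 items per student the recovered abilities are noisy but strongly correlated with the truth; the time-sensitive variants add penalty or weighting terms to this same likelihood.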

27 pages, 5926 KB  
Article
Evaluation of Machine Learning Models for Enhancing Sustainability in Additive Manufacturing
by Waqar Shehbaz and Qingjin Peng
Technologies 2025, 13(6), 228; https://doi.org/10.3390/technologies13060228 - 3 Jun 2025
Viewed by 1031
Abstract
Additive manufacturing (AM) presents significant opportunities for advancing sustainability through optimized process control and material utilization. This research investigates the application of machine learning (ML) models to directly associate AM process parameters with sustainability metrics, which is often challenging with experimental methods alone. Initially, experimental data are generated by systematically varying key AM parameters: layer height, infill density, infill pattern, build orientation, and number of shells. Subsequently, four ML models are trained and evaluated: Linear Regression, Decision Trees, Random Forest, and Gradient Boosting. Hyperparameter tuning is conducted using the Limited-memory Broyden–Fletcher–Goldfarb–Shanno with Box constraints (L-BFGS-B) algorithm, which demonstrates superior computational efficiency compared to traditional approaches such as grid and random search. Among the models, Random Forest yields the highest predictive accuracy and lowest mean squared error across all target sustainability indicators: energy consumption, part weight, scrap weight, and production time. The results confirm the efficacy of ML in predicting sustainability outcomes when supported by robust experimental data. This research offers a scalable and computationally efficient approach to enhancing sustainability in AM processes and contributes to data-driven decision-making in sustainable manufacturing.

16 pages, 4254 KB  
Article
Robust Parameter Inversion and Subsidence Prediction for Probabilistic Integral Methods in Mining Areas
by Xinjian Fang, Rui Yang, Mingfei Zhu, Jinling Duan and Shenshen Chi
Appl. Sci. 2025, 15(11), 5849; https://doi.org/10.3390/app15115849 - 23 May 2025
Viewed by 434
Abstract
Surface subsidence induced by coal mining poses severe threats to global ecosystems and infrastructure. A critical challenge in subsidence prediction lies in the sensitivity of existing probabilistic integral parameter inversion methods to gross errors, leading to unstable predictions and compromised reliability. To address this limitation, we propose the IGGIII-BFGS algorithm, which integrates robust estimation with an unconstrained optimization method, enhancing resistance to gross errors during parameter inversion. Through systematic comparison of four robust estimation methods (Huber, L1, Geman–McClure, IGGIII) fused with BFGS, the IGGIII-BFGS method demonstrated superior stability and accuracy, reducing relative errors in key parameters (subsidence factor q, horizontal displacement coefficient b, and tangent of major influence angle tan β) to near-zero levels. Validation on the Huainan mining case study showed that the IGGIII-BFGS method achieved a 25.8% reduction in subsidence RMSE compared to standard BFGS, with predicted curves exhibiting strong agreement with field measurements. This advancement enables precise forecasting of subsidence and horizontal displacement, which holds significant value for the sustainable development of the surface ecological environment and social stability.
(This article belongs to the Section Applied Industrial Technologies)
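The general pattern of fusing robust estimation with BFGS can be sketched as iteratively reweighted fitting, where each inner fit is a BFGS minimization and the weights follow a Huber-style rule (one of the four schemes the paper compares; the IGGIII three-segment scheme is similar in spirit). The linear model with injected gross errors below is a toy stand-in for the probabilistic integral subsidence model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = np.c_[np.ones(100), rng.uniform(0, 10, 100)]
y = X @ np.array([2.0, 0.5]) + rng.normal(0, 0.1, 100)
y[:5] += 8.0                       # inject gross errors (outliers)

def fit(weights):
    # Weighted least squares solved by BFGS (stand-in for the paper's inner step)
    obj = lambda beta: np.sum(weights * (y - X @ beta) ** 2)
    return minimize(obj, x0=np.zeros(2), method="BFGS").x

w = np.ones(100)
for _ in range(10):                 # iteratively reweighted BFGS fits
    beta = fit(w)
    r = np.abs(y - X @ beta)
    s = 1.4826 * np.median(r)       # robust scale estimate (MAD)
    k = 1.345 * s                   # Huber threshold
    w = np.minimum(1.0, k / np.maximum(r, 1e-12))

print(beta)    # close to the true [2.0, 0.5] despite the outliers
```

The gross-error observations end up with weights well below one, so the final fit is driven by the clean data; an IGGIII scheme would additionally zero out the most extreme residuals.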

23 pages, 2319 KB  
Article
Codesign of Transmit Waveform and Receive Filter with Similarity Constraints for FDA-MIMO Radar
by Qiping Zhang, Jinfeng Hu, Xin Tai, Yongfeng Zuo, Huiyong Li, Kai Zhong and Chaohai Li
Remote Sens. 2025, 17(10), 1800; https://doi.org/10.3390/rs17101800 - 21 May 2025
Viewed by 645
Abstract
The codesign of the receive filter and transmit waveform under similarity constraints is one of the key technologies in frequency diverse array multiple-input multiple-output (FDA-MIMO) radar systems. This paper discusses the design of constant modulus waveforms and filters aimed at maximizing the signal-to-interference-and-noise ratio (SINR). The problem's non-convexity renders it challenging to solve. Existing studies have typically employed relaxation-based methods, which inevitably introduce relaxation errors that degrade system performance. To address these issues, we propose an optimization framework based on the joint complex circle manifold–complex sphere manifold space (JCCM-CSMS). Firstly, the similarity constraint is converted into a penalty term in the objective function using an adaptive penalty strategy. Then, JCCM-CSMS is constructed to satisfy the waveform constant modulus constraint and the filter norm constraint. The problem is projected into this space and transformed into an unconstrained optimization problem. Finally, the Riemannian limited-memory Broyden–Fletcher–Goldfarb–Shanno (RL-BFGS) algorithm is employed to optimize the variables in parallel. Simulation results demonstrate that our method achieves a 0.6 dB improvement in SINR compared to existing methods while maintaining competitive computational efficiency. Waveform similarity was also analyzed.
(This article belongs to the Special Issue Array Digital Signal Processing for Radar)

20 pages, 4751 KB  
Article
Recovery and Characterization of Tissue Properties from Magnetic Resonance Fingerprinting with Exchange
by Naren Nallapareddy and Soumya Ray
J. Imaging 2025, 11(5), 169; https://doi.org/10.3390/jimaging11050169 - 20 May 2025
Viewed by 587
Abstract
Magnetic resonance fingerprinting (MRF), a quantitative MRI technique, enables the acquisition of multiple tissue properties in a single scan. In this paper, we study a proposed extension of MRF, MRF with exchange (MRF-X), which can enable acquisition of the six tissue properties T1a, T2a, T1b, T2b, ρ, and τ simultaneously. In MRF-X, 'a' and 'b' refer to distinct compartments modeled in each voxel, while ρ is the fractional volume of component 'a', and τ is the exchange rate of protons between the two components. To assess the feasibility of recovering these properties, we first empirically characterize a similarity metric between MRF and MRF-X reconstructed tissue property values and known reference property values for candidate signals. Our characterization indicates that such a recovery is possible, although the similarity metric surface across the candidate tissue properties is less structured for MRF-X than for MRF. We then investigate the application of different optimization techniques to recover tissue properties from noisy MRF and MRF-X data. Previous work has widely utilized template dictionary-based approaches in the context of MRF; however, such approaches are infeasible with MRF-X. Our results show that Simplicial Homology Global Optimization (SHGO), a global optimization algorithm, and the Limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm with Bounds (L-BFGS-B), a local optimization algorithm, performed comparably with direct matching in two-tissue property MRF at an SNR of 5. These optimization methods also successfully recovered five tissue properties from MRF-X data. However, with the current pulse sequence and reconstruction approach, recovering all six tissue properties remains challenging for all the methods investigated.
(This article belongs to the Section Medical Imaging)
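The global-versus-local contrast between SHGO and L-BFGS-B can be illustrated with scipy on a small multimodal surface standing in for the less-structured MRF-X similarity landscape; the test function is an assumption, not the actual dictionary-free objective.

```python
import numpy as np
from scipy.optimize import minimize, shgo

# Multimodal surface standing in for a property-matching objective:
# many local minima, global minimum 0 at (0, 0).
def f(x):
    return 0.1 * np.sum(np.asarray(x) ** 2) + 1.0 - np.cos(3 * x[0]) * np.cos(3 * x[1])

bounds = [(-3, 3), (-3, 3)]

global_res = shgo(f, bounds)                    # global: samples the whole box
local_res = minimize(f, x0=[2.5, -2.5],         # local: depends on the start point
                     method="L-BFGS-B", bounds=bounds)

print(global_res.x, global_res.fun)             # near the global minimum at (0, 0)
print(local_res.x, local_res.fun)               # typically trapped in a nearby basin
```

This mirrors the paper's finding: on a well-structured surface both approaches agree, but on a rougher one the local method's answer depends on its initialization while SHGO explores all basins at extra cost.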

27 pages, 18646 KB  
Article
Enhancing Extreme Learning Machine Robustness via Residual-Variance-Aware Dynamic Weighting and Broyden–Fletcher–Goldfarb–Shanno Optimization: Application to Metro Crowd Flow Prediction
by Lihui Wang and Jianguang Xie
Systems 2025, 13(5), 349; https://doi.org/10.3390/systems13050349 - 3 May 2025
Cited by 1 | Viewed by 713
Abstract
Aiming at the robustness problem of the extreme learning machine (ELM) in noisy and nonuniform data scenarios, this paper proposes an improved algorithm (BFGS-URWELM) that integrates uniform residual weighting and Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton optimization. This method introduces a sample weighting mechanism based on the target residual variance, dynamically adjusts the importance of training samples, and iteratively corrects the input weights and biases of the ELM in combination with the BFGS optimization strategy, effectively improving the prediction accuracy and stability of the model. The experiment is based on the passenger flow data of 80 subway stations and compares traditional machine learning algorithms, ensemble learning methods, and ELM variant models. The results show that BFGS-URWELM achieves 28.34, 0.3071, and 19.76 on the RMSE, MAPE, and MAE indicators, respectively, improvements of 19.9–33.5% over the baseline ELM. In addition, the residual distribution is more concentrated near zero, and the goodness of fit R2 improves to 0.96. The algorithm significantly reduces prediction error under high-noise data and provides a highly robust solution for traffic flow prediction tasks.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
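The ELM core (a random hidden layer with a closed-form least-squares readout) combined with a residual-based sample reweighting pass can be sketched as follows. The variance-style weight rule and synthetic data are assumptions standing in for the paper's uniform residual weighting, and the BFGS refinement of the input weights is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (300, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.05, 300)
y[:10] += 3.0                           # noisy samples the weighting should suppress

# ELM: random input weights/biases; only the output weights are solved for
W = rng.normal(size=(3, 50))
bias = rng.normal(size=50)
H = np.tanh(X @ W + bias)               # hidden-layer activations

def readout(sample_w):
    # Weighted least-squares output weights via the normal equations
    Hw = H * sample_w[:, None]
    return np.linalg.lstsq(Hw.T @ H, Hw.T @ y, rcond=None)[0]

beta = readout(np.ones(300))            # plain ELM fit
r = y - H @ beta
sw = 1.0 / (1.0 + (r / r.std()) ** 2)   # downweight large-residual samples
beta_rw = readout(sw)                   # reweighted refit

mask = np.ones(300, bool)
mask[:10] = False
err = lambda b: np.sqrt(np.mean((y[mask] - H[mask] @ b) ** 2))
print(err(beta), err(beta_rw))          # reweighting lowers error on clean samples
```

In the full algorithm, this reweighting alternates with BFGS updates of `W` and `bias`, so both the readout and the random features adapt to the cleaned-up loss.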
