Search Results (176)

Search Parameters:
Keywords = cold-start problem

29 pages, 23797 KB  
Article
Tone Mapping of HDR Images via Meta-Guided Bayesian Optimization and Virtual Diffraction Modeling
by Deju Huang, Xifeng Zheng, Jingxu Li, Ran Zhan, Jiachang Dong, Yuanyi Wen, Xinyue Mao, Yufeng Chen and Yu Chen
Sensors 2025, 25(21), 6577; https://doi.org/10.3390/s25216577 - 25 Oct 2025
Viewed by 208
Abstract
This paper proposes a novel image tone-mapping framework that incorporates meta-learning, a psychophysical model, Bayesian optimization, and light-field virtual diffraction. First, we formalize the virtual diffraction process as a mathematical operator defined in the frequency domain to reconstruct high-dynamic-range (HDR) images through phase modulation, enabling precise control of image details and contrast. In parallel, we apply the Stevens power law to simulate the nonlinear luminance perception of the human visual system, thereby adjusting the overall brightness distribution of the HDR image and improving the visual experience. Unlike existing methods that primarily emphasize structural fidelity, the proposed method strikes a balance between perceptual fidelity and visual naturalness. Second, an adaptive parameter-tuning system based on Bayesian optimization is developed to optimize the Tone Mapping Quality Index (TMQI), quantifying uncertainty with probabilistic models to approximate the global optimum with fewer evaluations. Furthermore, we propose a task-distribution-oriented meta-learning framework: a meta-feature space based on image statistics is constructed, and task clustering is combined with a gated meta-learner to rapidly predict initial parameters. This approach significantly enhances the robustness of the algorithm in generalizing to diverse HDR content and effectively mitigates the cold-start problem in the early stage of Bayesian optimization, thereby accelerating the convergence of the overall optimization process. Experimental results demonstrate that the proposed method substantially outperforms state-of-the-art tone-mapping algorithms across multiple benchmark datasets, with an average improvement of up to 27% in naturalness. In addition, the meta-learning-guided Bayesian optimization achieves two- to five-fold faster convergence. In the trade-off between computational time and performance, the proposed method consistently dominates the Pareto frontier, achieving high-quality results and efficient convergence at low computational cost. Full article
(This article belongs to the Section Sensing and Imaging)
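To make the optimization loop described in the abstract above concrete, here is a minimal Python sketch that warm-starts a Gaussian-process Bayesian optimizer from a (here hard-coded) meta-predicted parameter guess and maximizes a stand-in quality score. The tmqi_score function, the two parameter names, and the bounds are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch: Bayesian optimization of a tone-mapping quality score,
# warm-started from a meta-predicted initial parameter guess (placeholder values).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def tmqi_score(params):
    """Stand-in for evaluating TMQI on a tone-mapped image (toy objective)."""
    gamma, contrast = params
    return -(gamma - 0.45) ** 2 - (contrast - 1.2) ** 2

bounds = np.array([[0.1, 1.0], [0.5, 2.0]])   # [gamma, contrast] search ranges (assumed)
X = [np.array([0.5, 1.0])]                    # meta-learned warm-start guess (assumed)
y = [tmqi_score(X[0])]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
rng = np.random.default_rng(0)

for _ in range(20):                           # BO loop: fit surrogate, pick next point by EI
    gp.fit(np.vstack(X), np.array(y))
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(256, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = mu - max(y)
    ei = imp * norm.cdf(imp / (sigma + 1e-9)) + sigma * norm.pdf(imp / (sigma + 1e-9))
    x_next = cand[np.argmax(ei)]              # expected-improvement acquisition
    X.append(x_next)
    y.append(tmqi_score(x_next))

print("best parameters:", X[int(np.argmax(y))], "score:", max(y))
```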

33 pages, 2812 KB  
Article
A Symmetry-Aware Predictive Framework for Olympic Cold-Start Problems and Rare Events Based on Multi-Granularity Transfer Learning and Extreme Value Analysis
by Yanan Wang, Yi Fei and Qiuyan Zhang
Symmetry 2025, 17(11), 1791; https://doi.org/10.3390/sym17111791 - 23 Oct 2025
Viewed by 217
Abstract
This paper addresses the cold-start problem and rare event prediction challenges in Olympic medal forecasting by proposing a predictive framework that integrates multi-granularity transfer learning with extreme value theory. The framework comprises two main components, a Multi-Granularity Transfer Learning Core (MG-TLC) and a Rare Event Analysis Module (RE-AM), which address multi-level prediction for data-scarce countries and first medal prediction tasks. The MG-TLC incorporates two key components: Dynamic Feature Space Reconstruction (DFSR) and the Hierarchical Adaptive Transfer Strategy (HATS). The RE-AM combines a Bayesian hierarchical extreme value model (BHEV) with piecewise survival analysis (PSA). Experiments based on comprehensive, licensed Olympic data from 1896–2024, where the framework was trained on data up to 2016, validated on the 2020 Games, and tested by forecasting the 2024 Games, demonstrate that the proposed framework significantly outperforms existing methods, reducing MAE by 25.7% for data-scarce countries and achieving an AUC of 0.833 for first medal prediction, 14.3% higher than baseline methods. This research establishes a foundation for predicting the 2028 Los Angeles Olympics and provides new approaches for cold-start and rare event prediction, with potential applicability to similar challenges in other data-scarce domains such as economics or public health. From a symmetry viewpoint, our framework is designed to preserve task-relevant invariances—permutation invariance in set-based country aggregation and scale robustness to macro-covariate units—via distributional alignment between data-rich and data-scarce domains and Olympic-cycle indexing. We treat departures from these symmetries (e.g., host advantage or event-program changes) as structured asymmetries and capture them with a rare event module that combines extreme value and survival modeling. Full article
(This article belongs to the Special Issue Applications Based on Symmetry in Machine Learning and Data Mining)
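As a rough illustration of the extreme-value component described above, the sketch below fits a generalized Pareto distribution to synthetic exceedances and estimates the probability of a rare outcome. The data, threshold, and rare level are invented placeholders rather than the paper's Olympic dataset.

```python
# Minimal sketch: modeling rare, first-medal-style events with extreme value theory.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
medal_counts = rng.gamma(shape=1.5, scale=4.0, size=500)   # synthetic historical counts

threshold = np.quantile(medal_counts, 0.90)                # peaks-over-threshold approach
exceedances = medal_counts[medal_counts > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances.
c, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

# Probability of exceeding a rare level, given the threshold has been crossed.
rare_level = threshold + 10.0
p_exceed = stats.genpareto.sf(rare_level - threshold, c, loc=loc, scale=scale)
print(f"P(count > {rare_level:.1f} | count > threshold) = {p_exceed:.4f}")
```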

27 pages, 761 KB  
Article
A Novel Framework Leveraging Social Media Insights to Address the Cold-Start Problem in Recommendation Systems
by Enes Celik and Sevinc Ilhan Omurca
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 234; https://doi.org/10.3390/jtaer20030234 - 2 Sep 2025
Viewed by 1032
Abstract
In today’s world, with rapidly developing technology, it has become possible to perform many transactions over the internet. Consequently, providing better service to online customers in every field has become a crucial task. These advancements have driven companies and sellers to recommend tailored products to their customers. Recommendation systems have emerged as a field of study to ensure that relevant and suitable products can be presented to users. One of the major challenges in recommendation systems is the cold-start problem, which arises when there is insufficient information about a newly introduced user or product. To address this issue, we propose a novel framework that leverages implicit behavioral insights from users’ X social media activity to construct personalized profiles without requiring explicit user input. In the proposed model, users’ behavioral profiles are first derived from their social media data. Then, recommendation lists are generated to address the cold-start problem by employing Boosting algorithms. The framework employs six boosting algorithms to classify user preferences for the top 20 most-rated films on Letterboxd. In this way, a solution is offered without requiring any additional external data beyond social media information. Experiments on a dataset demonstrate that CatBoost outperforms other methods, achieving an F1-score of 0.87 and MAE of 0.21. Based on experimental results, the proposed system outperforms existing methods developed to solve the cold-start problem. Full article
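The sketch below illustrates the general idea of classifying a cold-start user's preferences from social-media-derived behavioral features with a boosting model. It uses scikit-learn's gradient boosting as a generic stand-in for the six boosting algorithms (including CatBoost) evaluated in the paper, and the features and labels are synthetic placeholders.

```python
# Minimal sketch: a boosting classifier predicting whether a new user will like a film,
# using social-media-derived behavioral features (feature names are invented).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([
    rng.poisson(20, n),          # e.g., posts per month
    rng.random(n),               # share of film-related posts
    rng.random(n),               # sentiment score of film mentions
])
y = (0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```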

26 pages, 2199 KB  
Article
A Deep-Learning-Based Dynamic Multidimensional Memory-Augmented Personalized Recommendation Research
by Peihua Xu and Maoyuan Zhang
Appl. Sci. 2025, 15(17), 9597; https://doi.org/10.3390/app15179597 - 31 Aug 2025
Viewed by 542
Abstract
To address the problem of inaccurate matching between personalized exercise recommendations and learners’ mastery of knowledge concepts/learning abilities, we propose the Dynamic Multidimensional Memory Augmented knowledge tracing model (DMMA). This model integrates a dynamic key-value memory neural network with the Ebbinghaus Forgetting Curve. By incorporating time decay factors and knowledge concept mastery speed factors, it dynamically adjusts knowledge update intensity, effectively resolving the insufficient personalized recommendation capabilities of traditional models. Experimental validation demonstrates its effectiveness: on Algebra 2006–2007, DMMA achieves 82% accuracy, outperforming CRDP-KT by 6%, while maintaining 53–55% accuracy for cold-start users (0–5 interactions), which is 25% higher than CoKT. The model’s integration of the Ebbinghaus forgetting curve and K-means-based concept classification enhances adaptability. Genetic algorithm optimization yields a diversity score of 0.79, with 18% higher 30-day knowledge retention. The FastDTW–Sigmoid hybrid similarity calculation (weight transition 0.27–0.88) ensures smooth cold-start adaptation, while novelty metrics reach 0.65 via random-forest-driven prediction. Ablation studies confirm component necessity: removing time decay factors reduces accuracy by 2.2%. These results validate DMMA’s superior performance in personalized education. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
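A minimal sketch of the kind of Ebbinghaus-style time decay the abstract mentions is shown below; the stability constant and speed factor are assumed values, not DMMA's actual parameters.

```python
# Minimal sketch: applying an Ebbinghaus-style exponential time decay to stored
# knowledge-concept mastery values (constants are assumed, not the model's).
import numpy as np

def decayed_mastery(mastery, hours_since_review, stability=24.0, speed_factor=1.0):
    """Retention falls off exponentially with elapsed time; a higher speed factor slows decay."""
    retention = np.exp(-hours_since_review / (stability * speed_factor))
    return mastery * retention

mastery = np.array([0.9, 0.7, 0.4])          # mastery of three knowledge concepts
elapsed = np.array([2.0, 24.0, 72.0])        # hours since each concept was last practiced
print(decayed_mastery(mastery, elapsed, speed_factor=1.2))
```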

28 pages, 2462 KB  
Article
A Service Recommendation Model in Cloud Environment Based on Trusted Graph-Based Collaborative Filtering Recommender System
by Urvashi Rahul Saxena, Yogita Khatri, Rajan Kadel and Samar Shailendra
Network 2025, 5(3), 30; https://doi.org/10.3390/network5030030 - 13 Aug 2025
Viewed by 568
Abstract
Cloud computing has increasingly adopted multi-tenant infrastructures to enhance cost efficiency and resource utilization by enabling the shared use of computational resources. However, this shared model introduces several security and privacy concerns, including unauthorized access, data redundancy, and susceptibility to malicious activities. In such environments, the effectiveness of cloud-based recommendation systems largely depends on the trustworthiness of participating nodes. Traditional collaborative filtering techniques often suffer from limitations such as data sparsity and the cold-start problem, which significantly degrade rating prediction accuracy. To address these challenges, this study proposes a Trusted Graph-Based Collaborative Filtering Recommender System (TGBCF). The model integrates graph-based trust relationships with collaborative filtering to construct a trust-aware user network capable of generating reliable service recommendations. Each node’s reliability is quantitatively assessed using a trust metric, thereby improving both the accuracy and robustness of the recommendation process. Simulation results show that TGBCF achieves a rating prediction accuracy of 93%, outperforming the baseline collaborative filtering approach (82%). Moreover, the model reduces the influence of malicious nodes by 40–60%, demonstrating its applicability in dynamic and security-sensitive cloud service environments. Full article
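The following minimal sketch shows the basic mechanics of trust-weighted rating prediction, the idea underlying trust-aware collaborative filtering. The ratings and trust scores are illustrative, and TGBCF's graph-based trust computation is not reproduced here.

```python
# Minimal sketch: trust-weighted rating prediction. Neighbors' ratings are combined
# using their trust scores as weights; all values below are illustrative.
import numpy as np

def predict_rating(neighbor_ratings, trust_scores):
    """Weighted average of trusted neighbors' ratings for one service."""
    trust = np.asarray(trust_scores, dtype=float)
    ratings = np.asarray(neighbor_ratings, dtype=float)
    if trust.sum() == 0:
        return float(ratings.mean())          # fall back when no trust information exists
    return float(np.dot(trust, ratings) / trust.sum())

# Three neighbors rated a cloud service; the third neighbor is barely trusted.
print(predict_rating([4.5, 4.0, 1.0], [0.9, 0.7, 0.05]))
```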

21 pages, 655 KB  
Article
A Novel Framework Leveraging Large Language Models to Enhance Cold-Start Advertising Systems
by Albin Uruqi, Iosif Viktoratos and Athanasios Tsadiras
Future Internet 2025, 17(8), 360; https://doi.org/10.3390/fi17080360 - 8 Aug 2025
Viewed by 1266
Abstract
The cold-start problem remains a critical challenge in personalized advertising, where users with limited or no interaction history often receive suboptimal recommendations. This study introduces a novel, three-stage framework that systematically integrates transformer architectures and large language models (LLMs) to improve recommendation accuracy, transparency, and user experience throughout the entire advertising pipeline. The proposed approach begins with transformer-enhanced feature extraction, leveraging self-attention and learned positional encodings to capture deep semantic relationships among users, ads, and context. It then employs an ensemble integration strategy combining enhanced state-of-the-art models with optimized aggregation for robust prediction. Finally, an LLM-driven enhancement module performs semantic reranking, personalized message refinement, and natural language explanation generation while also addressing cold-start scenarios through pre-trained knowledge. The LLM component further supports diversification, fairness-aware ranking, and sentiment sensitivity to ensure more relevant, diverse, and ethically grounded recommendations. Extensive experiments on the DigiX and Avazu datasets demonstrate notable gains in click-through rate (CTR) prediction, while an in-depth evaluation with real users shows improvements in perceived ad relevance, message quality, transparency, and trust. This work advances the state of the art by combining CTR models with interpretability and contextual reasoning. The strengths of the proposed method, such as its innovative integration of components, empirical validation, multifaceted LLM application, and ethical alignment, highlight its potential as a robust, future-ready solution for personalized advertising. Full article
(This article belongs to the Special Issue Information Networks with Human-Centric LLMs)
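As a simplified illustration of the ensemble-integration stage described above, the sketch below aggregates CTR predictions from several base models with fixed weights and reranks candidate ads. The models, weights, and scores are placeholders, and the transformer and LLM stages are omitted.

```python
# Minimal sketch: combining CTR predictions from several base models with fixed weights,
# then ranking candidate ads by the aggregated score (all values are placeholders).
import numpy as np

def ensemble_ctr(base_scores, weights):
    """base_scores: (n_models, n_ads) array of predicted click probabilities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalize ensemble weights
    return w @ np.asarray(base_scores)

scores = np.array([
    [0.02, 0.10, 0.05],                      # model A's CTR estimates for three ads
    [0.03, 0.08, 0.07],                      # model B
    [0.01, 0.12, 0.04],                      # model C
])
combined = ensemble_ctr(scores, weights=[0.4, 0.4, 0.2])
ranking = np.argsort(-combined)              # rerank ads by aggregated CTR
print("combined:", combined, "ranking:", ranking)
```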

26 pages, 453 KB  
Article
Trend-Enabled Recommender System with Diversity Enhancer for Crop Recommendation
by Iulia Baraian, Rudolf Erdei, Rares Tamaian, Daniela Delinschi, Emil Marian Pasca and Oliviu Matei
Agriculture 2025, 15(15), 1614; https://doi.org/10.3390/agriculture15151614 - 25 Jul 2025
Viewed by 563
Abstract
Achieving optimal agricultural yields and promoting sustainable farming rely on accurate crop recommendations. However, the applicability of many current systems is limited by their considerable computational requirements and dependence on comprehensive datasets, especially in resource-limited contexts. This paper presents HOLISTIQ RS, a novel crop recommendation system explicitly designed for operation on low-specification hardware and in data-scarce regions. HOLISTIQ RS combines collaborative filtering with a Markov model to predict appropriate crop choices, drawing upon user profiles, regional agricultural data, and past crop performance. Results indicate that HOLISTIQ RS provides a significant increase in recommendation accuracy, achieving a MAP@5 of 0.31 and an nDCG@5 of 0.41, outperforming standard collaborative filtering methods (KNN achieved a MAP@5 of 0.28 and nDCG@5 of 0.38, and ANN achieved a MAP@5 of 0.25 and nDCG@5 of 0.35). Notably, the system also demonstrates enhanced recommendation diversity, achieving an Item Variety (IV@5) of 23%, which is absent in deterministic baselines. In addition, the system is engineered for reduced energy consumption and can be deployed on low-cost hardware. This provides a feasible and adaptable method for encouraging informed decision-making and promoting sustainable agricultural practices in areas where resources are constrained. Full article
(This article belongs to the Section Agricultural Systems and Management)
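The sketch below illustrates the Markov-model part of the approach in its simplest form: a first-order transition table built from past crop rotations and used to suggest likely next crops. The example sequences are invented, and the collaborative-filtering component is not shown.

```python
# Minimal sketch: a first-order Markov model over crop rotations. Transition
# probabilities are estimated from past sequences (example sequences are invented).
from collections import Counter, defaultdict

rotations = [
    ["maize", "wheat", "soy", "maize"],
    ["wheat", "soy", "maize", "wheat"],
    ["soy", "maize", "wheat", "soy"],
]

counts = defaultdict(Counter)
for seq in rotations:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1               # count observed crop transitions

def recommend_next(current_crop, top_k=2):
    total = sum(counts[current_crop].values())
    return [(crop, c / total) for crop, c in counts[current_crop].most_common(top_k)]

print(recommend_next("maize"))               # most likely follow-up crops after maize
```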

28 pages, 2181 KB  
Article
Novel Models for the Warm-Up Phase of Recommendation Systems
by Nourah AlRossais
Computers 2025, 14(8), 302; https://doi.org/10.3390/computers14080302 - 24 Jul 2025
Cited by 1 | Viewed by 709
Abstract
In the recommendation system (RS) literature, a distinction exists between studies dedicated to fully operational (known users/items) and cold-start (new users/items) RSs. The warm-up phase, the transition between the two, is not widely researched, despite evidence that attrition rates are highest for users and content providers during such periods. RS formulations, particularly deep learning models, do not easily allow for a warm-up phase. Herein, we propose two independent and complementary models to increase RS performance during the warm-up phase. The models apply to any cold-start RS expressible as a function of all user features, item features, and existing users’ preferences for existing items. We demonstrate substantial improvements: compared with not handling warm-up explicitly, accuracy-oriented metrics improved by up to 14%, and non-accuracy-oriented metrics, including serendipity and fairness, improved by up to 12%. The improvements were independent of the cold-start RS algorithm. Additionally, this paper introduces a method of examining the performance metrics of an RS during the warm-up phase as a function of the number of user–item interactions. We discuss problems such as data leakage and the temporal consistency of training/testing splits, which are often neglected during the offline evaluation of RSs. Full article
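The evaluation idea mentioned at the end of the abstract, tracking a performance metric as a function of the number of user-item interactions, can be sketched as below. The interaction log is synthetic and the bucket edges are arbitrary.

```python
# Minimal sketch: evaluating a recommender's hit rate as a function of how many
# interactions a user had at prediction time (interaction log is synthetic).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
log = pd.DataFrame({
    "user": rng.integers(0, 50, 2000),
    "hit": rng.integers(0, 2, 2000),          # 1 if the recommendation was accepted
})
log["n_prior"] = log.groupby("user").cumcount()   # interactions seen before this one

# Bucket by interaction count to trace the warm-up curve.
buckets = pd.cut(log["n_prior"], bins=[-1, 0, 5, 10, 20, np.inf],
                 labels=["0", "1-5", "6-10", "11-20", ">20"])
print(log.groupby(buckets, observed=False)["hit"].mean())
```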

20 pages, 709 KB  
Article
SKGRec: A Semantic-Enhanced Knowledge Graph Fusion Recommendation Algorithm with Multi-Hop Reasoning and User Behavior Modeling
by Siqi Xu, Ziqian Yang, Jing Xu and Ping Feng
Computers 2025, 14(7), 288; https://doi.org/10.3390/computers14070288 - 18 Jul 2025
Viewed by 604
Abstract
To address the limitations of existing knowledge graph-based recommendation algorithms, including insufficient utilization of semantic information and inadequate modeling of user behavior motivations, we propose SKGRec, a novel recommendation model that integrates knowledge graph and semantic features. The model constructs a semantic interaction graph (USIG) of user behaviors and employs a self-attention mechanism and a ranked optimization loss function to mine user interactions in fine-grained semantic associations. A relationship-aware aggregation module is designed to dynamically integrate higher-order relational features in the knowledge graph through the attention scoring function. In addition, a multi-hop relational path inference mechanism is introduced to capture long-distance dependencies to improve the depth of user interest modeling. Experiments on the Amazon-Book and Last-FM datasets show that SKGRec significantly outperforms several state-of-the-art recommendation algorithms on the Recall@20 and NDCG@20 metrics. Comparison experiments validate the effectiveness of semantic analysis of user behavior and multi-hop path inference, while cold-start experiments further confirm the robustness of the model in sparse-data scenarios. This study provides a new optimization approach for knowledge graph and semantic-driven recommendation systems, enabling more accurate capture of user preferences and alleviating the problem of noise interference. Full article
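As a loose illustration of relation-aware, attention-weighted aggregation over a knowledge-graph neighborhood, here is a minimal numpy sketch. The embeddings are random stand-ins, and the model's self-attention module, loss function, and multi-hop path reasoning are not reproduced.

```python
# Minimal sketch: attention-weighted aggregation of neighbor entity embeddings in a
# knowledge graph, scored against a user embedding (all vectors are random stand-ins).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(3)
dim = 16
user = rng.standard_normal(dim)
neighbors = rng.standard_normal((5, dim))     # embeddings of 5 connected entities
relations = rng.standard_normal((5, dim))     # embeddings of the connecting relations

# Score each neighbor by how well (neighbor + relation) matches the user's interests.
attn = softmax((neighbors + relations) @ user)
aggregated = attn @ neighbors                 # attention-weighted neighborhood summary
print(attn.round(3), aggregated.shape)
```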

25 pages, 3974 KB  
Article
The Hybrid Model: Prediction-Based Scheduling and Efficient Resource Management in a Serverless Environment
by Louai Shiekhani, Hui Wang, Wen Shi, Jiahao Liu, Yuan Qiu, Chunhua Gu and Weichao Ding
Appl. Sci. 2025, 15(14), 7632; https://doi.org/10.3390/app15147632 - 8 Jul 2025
Viewed by 984
Abstract
Serverless computing has gained significant attention in recent years. However, the cold-start problem remains a major challenge, not only because of the substantial latency it introduces to function execution time, but also because frequent cold starts lead to poor resource utilization, especially during workload fluctuations. To address these issues, we propose a multi-level scheduling solution: the Hybrid Model. This model is designed to reduce the frequency of cold starts while maximizing container utilization. At the global level (across invokers), the Hybrid Model employs a skewness-aware scheduling strategy to select the most appropriate invoker for each request. Within each invoker, we introduce a greedy, buffer-aware scheduling method that leverages the available slack (remaining buffer) of warm containers to aggressively encourage their reuse. Both the global and local schedulers are tightly integrated with a prediction component, the Hybrid Predictor, which combines an Auto-Regressive Integrated Moving Average (ARIMA) model for linear trends with a Random Forest for non-linear residuals and environment-aware features to produce 5-min workload forecasts. The Hybrid Model is implemented on Apache OpenWhisk and evaluated using Azure-like traces and real FaaS applications. The evaluations show that the Hybrid Model reduces SLA violations by up to 34% compared to three state-of-the-art approaches and keeps container utilization above 80%. Full article
(This article belongs to the Special Issue Advancements in Computer Systems and Operating Systems)
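A minimal sketch of an ARIMA-plus-random-forest hybrid forecaster in the spirit of the Hybrid Predictor is shown below. The workload series is synthetic, the model orders and lag count are assumed, and the environment-aware features used in the paper are omitted.

```python
# Minimal sketch: ARIMA captures the linear trend of the workload, while a random forest
# models the residuals from lagged features (synthetic data, assumed orders).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(300)
invocations = 50 + 0.1 * t + 10 * np.sin(t / 12) + rng.normal(0, 2, 300)  # fake workload

arima_res = ARIMA(invocations, order=(2, 1, 1)).fit()
residuals = invocations - arima_res.fittedvalues

# Lagged residuals as features for the nonlinear component.
lags = 5
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

linear_part = arima_res.forecast(steps=1)[0]
nonlinear_part = rf.predict(residuals[-lags:].reshape(1, -1))[0]
print("next-interval forecast:", linear_part + nonlinear_part)
```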

29 pages, 1602 KB  
Article
A Recommender System Model for Presentation Advisor Application Based on Multi-Tower Neural Network and Utility-Based Scoring
by Maria Vlahova-Takova and Milena Lazarova
Electronics 2025, 14(13), 2528; https://doi.org/10.3390/electronics14132528 - 22 Jun 2025
Viewed by 2127
Abstract
Delivering compelling presentations is a critical skill across academic, professional, and public domains, yet many presenters struggle with structuring content, maintaining visual consistency, and engaging their audience effectively. Existing tools offer isolated support for design or delivery but fail to promote long-term skill development. This paper presents a novel intelligent application, the Presentation Advisor, powered by a personalized recommendation engine that goes beyond fixing slide content and visualization, enabling users to build presentation competence. The recommendation engine leverages a model based on a hybrid multi-tower neural network architecture enhanced with temporal encoding, problem-sequence modeling, and utility-based scoring to deliver adaptive, context-aware feedback. Unlike current tools, the presented system analyzes user-submitted presentations to detect common issues and delivers curated educational content tailored to user preferences, presentation types, and audiences. The system also incorporates strategic cold-start mitigation, ensuring high-quality recommendations even for new users or unseen content. Comprehensive experimental evaluations demonstrate that the proposed model significantly outperforms content-based filtering, collaborative filtering, autoencoder, and reinforcement learning approaches on both accuracy and personalization metrics. By combining cutting-edge recommendation techniques with a pedagogical framework, the Presentation Advisor application enables users not only to improve individual presentations but also to become consistently better presenters over time. Full article
(This article belongs to the Section Computer Science & Engineering)
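The utility-based scoring idea can be illustrated with a small sketch that mixes predicted relevance with recency and a penalty for already-seen items. The weights, decay constant, and candidate names are hypothetical, not the application's actual scoring function.

```python
# Minimal sketch: utility-based scoring of candidate learning resources, combining
# predicted relevance, recency, and an already-seen penalty (weights are illustrative).
import math

def utility(relevance, days_since_published, already_seen,
            w_rel=0.7, w_rec=0.2, w_seen=0.5):
    recency = math.exp(-days_since_published / 180.0)   # newer content is slightly preferred
    return w_rel * relevance + w_rec * recency - w_seen * float(already_seen)

candidates = {
    "slide-structure-guide": utility(0.90, 30, already_seen=False),
    "color-contrast-basics": utility(0.80, 400, already_seen=False),
    "storytelling-101":      utility(0.95, 10, already_seen=True),
}
print(sorted(candidates.items(), key=lambda kv: kv[1], reverse=True))
```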

42 pages, 2051 KB  
Article
Knowledge Bases and Representation Learning Towards Bug Triaging
by Qi Wang, Weihao Yan, Yanlong Li, Yizheng Ge, Yiwei Liu, Peng Yin and Shuai Tan
Mach. Learn. Knowl. Extr. 2025, 7(2), 57; https://doi.org/10.3390/make7020057 - 19 Jun 2025
Viewed by 1191
Abstract
A large number of bug reports are submitted by users and developers to bug-tracking systems every day. Manually assigning these bug reports to appropriate developers for fixing is time-consuming for software maintainers. Many bug-triaging methods have been developed to automate this process. However, most previous studies have focused mainly on analyzing textual content and failed to make full use of the structured information embedded in the bug-tracking system. In fact, this structured information, which plays an important role in bug triaging, reflects the bug-tracking process and historical activities. To further improve the performance of automatic bug triaging, in this study, we propose a new representation learning model for knowledge bases, PTITransE, which extends TransE by enhancing the embeddings with textual entity descriptions and is better suited to bug triaging. Moreover, we make the first attempt to apply knowledge-base and link-prediction techniques to bug triaging. For each new bug report, the proposed framework recommends the top-k developers for fixing it, using the learned embeddings of entities and relations. Evaluation is performed on three real-world projects, and the results indicate that our method outperforms baseline bug-triaging approaches and can alleviate the cold-start problem in bug triaging. Full article
(This article belongs to the Section Learning)
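For readers unfamiliar with translation-based knowledge-graph embeddings, the sketch below shows plain TransE-style scoring applied to developer recommendation. The embeddings are random placeholders, and PTITransE's use of textual entity descriptions is omitted.

```python
# Minimal sketch: TransE-style scoring, where a triple (bug, assigned_to, developer) is
# plausible when bug + relation is close to developer in embedding space.
import numpy as np

rng = np.random.default_rng(5)
dim = 32
developers = {name: rng.standard_normal(dim) for name in ["alice", "bob", "carol"]}
relation_assigned_to = rng.standard_normal(dim)
new_bug = rng.standard_normal(dim)            # embedding of a new bug report (random here)

def transe_score(head, relation, tail):
    return -np.linalg.norm(head + relation - tail)   # higher (less negative) is better

ranking = sorted(developers,
                 key=lambda d: transe_score(new_bug, relation_assigned_to, developers[d]),
                 reverse=True)
print("top-k developer recommendation:", ranking)
```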

22 pages, 2232 KB  
Article
EvoContext: Evolving Contextual Examples by Genetic Algorithm for Enhanced Hyperparameter Optimization Capability in Large Language Models
by Yutian Xu, Guozhong Qin, Yanhao Wang, Panfeng Chen, Xibin Wang, Wei Zhou, Mei Chen and Hui Li
Electronics 2025, 14(11), 2253; https://doi.org/10.3390/electronics14112253 - 31 May 2025
Viewed by 1716
Abstract
Hyperparameter Optimization (HPO) is an important and challenging problem in machine learning. Traditional HPO methods require a large number of evaluations to search for superior configurations. Recent Large Language Model (LLM)-based approaches leverage domain knowledge and few-shot learning proficiency to discover promising configurations with minimal human effort. However, the repetition issue causes LLMs to generate configurations similar to the context examples, which may confine the optimization process to local regions. Moreover, since LLMs rely on the examples they generate for few-shot learning, a self-reinforcing loop is formed, hindering LLMs from escaping local optima. In this work, we propose EvoContext, which intentionally generates configurations that differ significantly from the examples via external interventions and actively breaks the self-reinforcing effect for a more efficient approximation of the global optimum. EvoContext involves two phases: (i) initial example generation through cold or warm starting, and (ii) iterative optimization that integrates genetic operations for updating examples to enhance global exploration. Additionally, it employs the LLM’s in-context learning to generate configurations based on competitive examples for local refinement. Experiments on several real-world datasets show that EvoContext outperforms traditional and other LLM-driven approaches to HPO. Full article
(This article belongs to the Section Artificial Intelligence)
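A minimal sketch of the genetic-operation phase (selection, crossover, and mutation over hyperparameter configurations) is given below. The objective, bounds, and population sizes are toy placeholders, and the LLM in-context refinement step is not included.

```python
# Minimal sketch: genetic-algorithm style updates over hyperparameter configurations,
# which in EvoContext would then serve as in-context examples for an LLM.
import random

random.seed(0)
BOUNDS = {"learning_rate": (1e-4, 1e-1), "max_depth": (2, 12)}

def evaluate(cfg):                             # stand-in for training + validation score
    return -((cfg["learning_rate"] - 0.01) ** 2) - 0.001 * (cfg["max_depth"] - 6) ** 2

def random_config():
    return {"learning_rate": random.uniform(*BOUNDS["learning_rate"]),
            "max_depth": random.randint(*BOUNDS["max_depth"])}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(cfg, rate=0.3):
    out = dict(cfg)
    if random.random() < rate:
        out["learning_rate"] = random.uniform(*BOUNDS["learning_rate"])
    if random.random() < rate:
        out["max_depth"] = random.randint(*BOUNDS["max_depth"])
    return out

population = [random_config() for _ in range(10)]
for _ in range(15):
    population.sort(key=evaluate, reverse=True)
    parents = population[:4]                   # keep the most competitive examples
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(6)]
    population = parents + children

print("best configuration found:", max(population, key=evaluate))
```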

44 pages, 7066 KB  
Article
A Biologically Inspired Trust Model for Open Multi-Agent Systems That Is Resilient to Rapid Performance Fluctuations
by Zoi Lygizou and Dimitris Kalles
Appl. Sci. 2025, 15(11), 6125; https://doi.org/10.3390/app15116125 - 29 May 2025
Cited by 1 | Viewed by 727
Abstract
Trust management provides an alternative solution for securing open, dynamic, and distributed multi-agent systems, where conventional cryptographic methods prove to be impractical. However, existing trust models face challenges such as agent mobility, which causes agents to lose accumulated trust when moving across networks; changing behaviors, where previously reliable agents may degrade over time; and the cold start problem, which hinders the evaluation of newly introduced agents due to a lack of prior data. To address these issues, we introduced a biologically inspired trust model in which trustees assess their own capabilities and store trust data locally. This design improves mobility support, reduces communication overhead, resists disinformation, and preserves privacy. Despite these advantages, prior evaluations revealed the limitations of our model in adapting to provider population changes and continuous performance fluctuations. This study proposes a novel algorithm, incorporating a self-classification mechanism for providers to detect performance drops that are potentially harmful for service consumers. The simulation results demonstrate that the new algorithm outperforms its original version and FIRE, a well-known trust and reputation model, particularly in handling dynamic trustee behavior. While FIRE remains competitive under extreme environmental changes, the proposed algorithm demonstrates greater adaptability across various conditions. In contrast to existing trust modeling research, this study conducts a comprehensive evaluation of our model using widely recognized trust model criteria, assessing its resilience against common trust-related attacks while identifying strengths, weaknesses, and potential countermeasures. Finally, several key directions for future research are proposed. Full article
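The self-classification mechanism described above can be loosely illustrated as a provider comparing its recent performance window against its own historical baseline. The window size and drop threshold below are assumed values, not those of the proposed algorithm.

```python
# Minimal sketch: a provider-side check that flags a performance drop when recent task
# outcomes fall well below the provider's own historical average (thresholds assumed).
from collections import deque

class ProviderSelfAssessment:
    def __init__(self, window=10, drop_threshold=0.7):
        self.history = []                      # all past performance scores in [0, 1]
        self.recent = deque(maxlen=window)     # sliding window of latest scores
        self.drop_threshold = drop_threshold

    def record(self, score):
        self.history.append(score)
        self.recent.append(score)

    def performance_dropped(self):
        if len(self.history) < 2 * self.recent.maxlen:
            return False                       # not enough data to judge yet
        baseline = sum(self.history) / len(self.history)
        recent_avg = sum(self.recent) / len(self.recent)
        return recent_avg < self.drop_threshold * baseline

p = ProviderSelfAssessment()
for s in [0.9] * 20 + [0.4] * 10:              # stable service, then a sharp degradation
    p.record(s)
print("drop detected:", p.performance_dropped())
```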

23 pages, 969 KB  
Article
Dynamic Dual-Phase Forecasting Model for New Product Demand Using Machine Learning and Statistical Control
by Chien-Chih Wang
Mathematics 2025, 13(10), 1613; https://doi.org/10.3390/math13101613 - 14 May 2025
Cited by 1 | Viewed by 2069
Abstract
Forecasting demand for newly introduced products presents substantial challenges within high-mix, low-volume manufacturing contexts, primarily due to cold-start conditions and unpredictable order behavior. This research proposes the Dynamic Dual-Phase Forecasting Framework (DDPFF) that amalgamates machine learning-based classification, similarity-driven analogous forecasting, ARMA-based residual compensation, and statistical process control for adaptive model refinement. The framework underwent evaluation through five real-world case studies conducted by a Taiwanese semiconductor tray manufacturer, encompassing a variety of scenarios characterized by high volatility, seasonality, and structural drift. The results indicate that DDPFF consistently outperformed conventional ARIMA and analogous forecasting methodologies, yielding an average reduction of 35.7% in mean absolute error and a 41.8% enhancement in residual stability across all examined cases. In one representative instance, the forecast error decreased by 44.9% compared to established benchmarks. These findings underscore the framework’s resilience in cold-start situations and its capacity to adapt to evolving demand patterns, providing a viable solution for data-scarce and dynamic manufacturing environments. Full article
(This article belongs to the Special Issue Applied Statistics in Management Sciences)
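The statistical-process-control element of the framework can be sketched as a simple 3-sigma check on forecast residuals that triggers model refinement. The residual series and limits below are synthetic and are not taken from the case studies.

```python
# Minimal sketch: statistical process control on forecast residuals. A residual outside
# the +/- 3-sigma control limits flags the model for refitting (synthetic data).
import numpy as np

rng = np.random.default_rng(11)
residuals = rng.normal(0, 5, 40)               # in-control forecast errors
residuals = np.append(residuals, [22.0])       # an error consistent with structural drift

center = residuals[:40].mean()
sigma = residuals[:40].std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out_of_control = (residuals > ucl) | (residuals < lcl)
if out_of_control.any():
    print("refit triggered at observations:", np.where(out_of_control)[0])
else:
    print("forecast model remains in control")
```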
