Review

Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus

Ke-Lin Du, Rengong Zhang, Bingchun Jiang, Jie Zeng and Jiabin Lu

1 School of Mechanical and Electrical Engineering, Guangdong University of Science and Technology, Dongguan 523668, China
2 Zhejiang Yugong Information Technology Co., Ltd., Changhe Road 475, Hangzhou 310002, China
3 Shenzhen Feng Xing Tai Bao Technology Co., Ltd., Shenzhen 518063, China
4 Faculty of Electromechanical Engineering, Guangdong University of Technology, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(4), 587; https://doi.org/10.3390/math13040587
Submission received: 1 January 2025 / Revised: 2 February 2025 / Accepted: 8 February 2025 / Published: 11 February 2025

Abstract

Ensemble learning and data fusion techniques play a crucial role in modern machine learning, enhancing predictive performance, robustness, and generalization. This paper provides a comprehensive survey of ensemble methods, covering foundational techniques such as bagging, boosting, and random forests, as well as advanced topics including multiclass classification, multiview learning, multiple kernel learning, and the Dempster–Shafer theory of evidence. We present a comparative analysis of ensemble learning and deep learning, highlighting their respective strengths, limitations, and synergies. Additionally, we examine the theoretical foundations of ensemble methods, including bias–variance trade-offs, margin theory, and optimization-based frameworks, while analyzing computational trade-offs related to training complexity, inference efficiency, and storage requirements. To enhance accessibility, we provide a structured comparative summary of key ensemble techniques. Furthermore, we discuss emerging research directions, such as adaptive ensemble methods, hybrid deep learning approaches, and multimodal data fusion, as well as challenges related to interpretability, model selection, and handling noisy data in high-stakes applications. By integrating theoretical insights with practical considerations, this survey serves as a valuable resource for researchers and practitioners seeking to understand the evolving landscape of ensemble learning and its future prospects.
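The foundational techniques named in the abstract can be made concrete with a small example. The sketch below is a minimal illustration of bagging (bootstrap aggregating), assuming scikit-learn and NumPy are available; the synthetic dataset, ensemble size, and hyperparameters are illustrative assumptions, not settings taken from the paper. Each base tree is trained on a bootstrap resample of the training set, and the ensemble prediction is a majority vote.

```python
# Minimal bagging sketch: bootstrap-resampled decision trees combined
# by majority vote. Dataset and hyperparameters are illustrative
# assumptions, not taken from the survey.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification problem (labels in {0, 1}).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_estimators = 25
preds = []
for _ in range(n_estimators):
    # Bootstrap sample: draw len(X_tr) indices with replacement.
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr[idx], y_tr[idx])
    preds.append(tree.predict(X_te))

# Majority vote: with 0/1 labels, predict 1 when more than half
# of the trees vote 1.
vote = (np.asarray(preds).mean(axis=0) > 0.5).astype(int)
print("bagged accuracy:", (vote == y_te).mean())
```

Averaging many high-variance trees trained on resampled data illustrates the bias–variance mechanism the survey discusses: the vote reduces variance while leaving bias roughly unchanged.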
Keywords: ensemble learning; bagging; boosting; random forests; deep learning integration; multimodal data fusion
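The abstract also lists the Dempster–Shafer theory of evidence as a fusion tool. As a hedged sketch (the frame of discernment and the mass values below are hypothetical, chosen only for illustration), the following snippet implements Dempster's rule of combination, which fuses two basic mass assignments by multiplying masses of intersecting focal elements and renormalizing by one minus the total conflict K.

```python
# Dempster's rule of combination for two basic mass assignments.
# Focal elements are frozensets over a small frame of discernment;
# the example masses are hypothetical values for illustration.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    # Normalize by 1 - K, where K is the total conflict.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
theta = A | B  # full frame of discernment {a, b}
m1 = {A: 0.6, theta: 0.4}
m2 = {B: 0.3, theta: 0.7}
print(combine(m1, m2))
```

With these hypothetical masses, most of the combined belief lands on {a} (about 0.51 after normalization), since m1 commits more mass to it while m2 leaves most of its mass uncommitted on the full frame.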

Share and Cite

MDPI and ACS Style

Du, K.-L.; Zhang, R.; Jiang, B.; Zeng, J.; Lu, J. Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus. Mathematics 2025, 13, 587. https://doi.org/10.3390/math13040587

AMA Style

Du K-L, Zhang R, Jiang B, Zeng J, Lu J. Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus. Mathematics. 2025; 13(4):587. https://doi.org/10.3390/math13040587

Chicago/Turabian Style

Du, Ke-Lin, Rengong Zhang, Bingchun Jiang, Jie Zeng, and Jiabin Lu. 2025. "Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus" Mathematics 13, no. 4: 587. https://doi.org/10.3390/math13040587

APA Style

Du, K.-L., Zhang, R., Jiang, B., Zeng, J., & Lu, J. (2025). Foundations and Innovations in Data Fusion and Ensemble Learning for Effective Consensus. Mathematics, 13(4), 587. https://doi.org/10.3390/math13040587

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
