Article

Aggregation and Pruning for Continuous Incremental Multi-Task Inference

by Lining Li, Fenglin Cen, Quan Feng and Ji Xu
1 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
3 Hunan Vanguard Group Corporation Limited, Changsha 410100, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2025, 13(9), 1414; https://doi.org/10.3390/math13091414
Submission received: 5 March 2025 / Revised: 14 April 2025 / Accepted: 23 April 2025 / Published: 25 April 2025
(This article belongs to the Special Issue Research on Graph Neural Networks and Knowledge Graph)

Abstract

In resource-constrained mobile systems, efficiently handling incrementally added tasks under dynamically evolving requirements is a critical challenge. To address this, we propose aggregate pruning (AP), a framework that combines pruning with filter aggregation to optimize deep neural networks for continuous incremental multi-task learning (MTL). The approach reduces redundancy by dynamically pruning and aggregating similar filters across tasks, ensuring efficient use of computational resources while maintaining high task-specific performance. The aggregation strategy enables effective filter sharing across tasks, significantly reducing model complexity. Additionally, an adaptive mechanism is incorporated into AP to adjust filter sharing based on task similarity, further enhancing efficiency. Experiments on different backbone networks, including LeNet, VGG, and ResNet, show that AP achieves substantial parameter reduction and computational savings with minimal accuracy loss, outperforming existing pruning methods and even surpassing non-pruning MTL techniques. The architecture-agnostic design of AP also enables potential extensions to complex architectures like graph neural networks (GNNs), offering a promising solution for incremental multi-task GNNs.
Keywords: pruning; multi-task learning; graph neural networks
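To illustrate the kind of operation the abstract describes, below is a minimal, hypothetical Python/PyTorch sketch of magnitude-based pruning followed by similarity-based filter aggregation for a single convolutional layer. It is not the authors' implementation: the function name prune_and_aggregate, the L1-norm pruning criterion, the cosine-similarity threshold, and the mean-merge rule are assumptions made for illustration only; the paper's actual AP procedure, including its adaptive, task-similarity-based sharing, may differ.

# Illustrative sketch only -- not the paper's implementation.
# Assumes L1-norm pruning and cosine-similarity merging of conv filters.
import torch
import torch.nn.functional as F

def prune_and_aggregate(filters, prune_ratio=0.3, sim_threshold=0.9):
    """filters: tensor of shape (num_filters, in_ch, k, k) from one conv layer."""
    flat = filters.flatten(1)                      # (num_filters, in_ch*k*k)

    # 1) Prune: keep the filters with the largest L1 norms.
    l1 = flat.abs().sum(dim=1)
    keep = l1.argsort(descending=True)[: int(len(flat) * (1 - prune_ratio))]
    kept = flat[keep]

    # 2) Aggregate: greedily merge filters whose cosine similarity exceeds
    #    the threshold, replacing each group by its mean (a shared filter).
    sims = F.cosine_similarity(kept.unsqueeze(1), kept.unsqueeze(0), dim=2)
    groups, used = [], torch.zeros(len(kept), dtype=torch.bool)
    for i in range(len(kept)):
        if used[i]:
            continue
        members = (~used) & (sims[i] >= sim_threshold)
        used |= members
        groups.append(kept[members].mean(dim=0))

    return torch.stack(groups).view(-1, *filters.shape[1:])

# Example: 64 random 3x3 filters with 16 input channels.
merged = prune_and_aggregate(torch.randn(64, 16, 3, 3))
print(merged.shape)  # fewer, shared filters after pruning + aggregation

In an incremental multi-task setting, one would apply such a step when a new task's filters are added, so that filters similar to already-shared ones are merged rather than duplicated; the threshold controlling the merge could be adapted to task similarity, in the spirit of the adaptive mechanism the abstract mentions.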


