Learning-Based Branching Acceleration for Unit Commitment with Few Training Samples
Abstract
1. Introduction
1.1. Background and Motivation
1.2. Literature Survey
1.3. Contribution and Organization
- Branching acceleration for UC transforms the B&B algorithm into a sequential decision problem, employing imitation learning to learn an enhanced pruning policy for the B&B tree. This enhanced policy effectively prunes uncertain, non-optimal nodes, significantly accelerating the B&B algorithm on UC problems. Notably, by iteratively raising a threshold to expand the search space, the method ensures feasibility while keeping the average search space small.
- To further improve the performance of branching acceleration for UC, this study carefully designs the training process and input features of the ML model. Numerical studies demonstrate that the method achieves near-optimal performance with only a few training UC problem instances and low computational complexity, while exhibiting excellent generalization capability under task mismatches. Furthermore, the studies investigate the impact of various threshold settings during the testing phase. Appropriate threshold settings can effectively improve the method’s performance while balancing its various performance metrics.
- In contrast to existing related research, which discards the original policy of the B&B algorithm, branching acceleration for UC is independent of the B&B algorithm’s stages. It can therefore be seamlessly combined with other B&B improvement methods, leveraging the strengths of each to enhance overall performance.
2. Formulation of Unit Commitment Problem
2.1. Objective Function
2.2. Constraints
- System power balance constraints (2)
- Generation capacity constraints (3)
- Ramp-up/down rate constraints (4)
- Dependency of binary variables (5)
- Minimum up/down time constraints (6)
- Hydropower plant generation capacity constraints (7)
- Hydropower plant daily generation constraints
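As an illustration, the first few constraints listed above typically take the following form in standard UC notation; every symbol below is an assumption for illustration, not necessarily the paper’s own definition:

```latex
% Assumed notation: u_{g,t} \in \{0,1\} on/off status of unit g at period t,
% p_{g,t} power output, D_t system load, \underline{P}_g / \overline{P}_g output
% limits, R_g ramp limit, y_{g,t}, z_{g,t} \in \{0,1\} startup/shutdown indicators.
\begin{align}
  \sum_{g} p_{g,t} &= D_t && \text{(power balance)} \\
  u_{g,t}\,\underline{P}_g \;\le\; p_{g,t} &\;\le\; u_{g,t}\,\overline{P}_g && \text{(generation capacity)} \\
  -R_g \;\le\; p_{g,t} - p_{g,t-1} &\;\le\; R_g && \text{(ramp-up/down rate)} \\
  u_{g,t} - u_{g,t-1} &= y_{g,t} - z_{g,t} && \text{(binary-variable dependency)}
\end{align}
```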
3. Branch-and-Bound Algorithm
- Branch selection policy: By assessing the significance of the discrete candidate variables, the policy chooses the variable on which to branch, partitioning the feasible region in a way that reduces the number of generated nodes.
- Node selection policy: Determines the exploration order of unexplored nodes to guide the B&B algorithm toward the node corresponding to the global optimal solution.
- Pruning policy: Decides whether a node should be further explored, which can effectively minimize the size of the search tree.
- Prune by bound: The solution of the relaxed problem P_i is non-integer, but its objective value z_i is greater than or equal to the current upper bound z̄. Since X_j ⊆ X_i for every child node N_j (X_i is the feasible domain of all the children nodes of N_i), z_i is a lower bound on the solution of every child, so the optimal solution of the children cannot be the global optimal solution to the original problem.
- Prune by infeasibility: The relaxed problem P_i is infeasible, i.e., unsolvable; its objective value is treated as z_i = +∞.
- Prune by integrality: The solution x_i of P_i is an integer solution, so an optimal solution to P_i has already been found. Therefore, there is no longer a need to search the children of N_i.
Algorithm 1 The branch-and-bound (B&B) algorithm

1: Input the root relaxed problem P_0, node number i ← 0, upper bound z̄ ← +∞;
2: L ← {P_0}, incumbent x̄ ← ∅;
3: while L ≠ ∅ do
4:   i ← i + 1;
5:   Pop the node N_i with relaxed problem P_i from L;
6:   Solve P_i, obtaining solution x_i with objective value z_i;
7:   if z_i < z̄ and x_i is an integer solution then
8:     z̄ ← z_i, x̄ ← x_i;
9:   else
10:    if z_i < z̄ then
11:      Select the non-integer variable of x_i to round up and down;
12:      Cut the feasible region into two subregions accordingly;
13:      Push the two children nodes of N_i onto L;
14:    end if
15:  end if
16: end while
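As a runnable illustration of the three pruning rules above (not the paper’s code), the following minimal B&B solves a 0/1 knapsack problem using its fractional relaxation as the bound. Knapsack is a maximization problem, so the incumbent plays the role of a lower bound, mirroring the upper bound z̄ of the minimization UC setting; all names are illustrative.

```python
def frac_relax(values, weights, cap, fixed):
    """LP (fractional) relaxation of a 0/1 knapsack node.

    fixed[i] is 1/0 if variable i was fixed by branching, or None if free.
    Returns (bound, frac_index); bound is None when the node is infeasible,
    frac_index is None when the relaxed optimum is already integral."""
    base_v = sum(v for v, f in zip(values, fixed) if f == 1)
    base_w = sum(w for w, f in zip(weights, fixed) if f == 1)
    if base_w > cap:
        return None, None                      # node infeasible
    free = sorted((i for i, f in enumerate(fixed) if f is None),
                  key=lambda i: values[i] / weights[i], reverse=True)
    v, w = base_v, base_w
    for i in free:
        if w + weights[i] <= cap:
            v += values[i]
            w += weights[i]
        else:
            v += values[i] * (cap - w) / weights[i]  # take a fraction of item i
            return v, i
    return v, None                             # relaxation is integral


def branch_and_bound(values, weights, cap):
    """Depth-first B&B applying the three pruning rules from Section 3."""
    best = 0.0                                 # incumbent value (lower bound here)
    stack = [tuple([None] * len(values))]      # root node: every variable free
    while stack:
        fixed = list(stack.pop())
        bound, frac = frac_relax(values, weights, cap, fixed)
        if bound is None:
            continue                           # prune by infeasibility
        if bound <= best:
            continue                           # prune by bound
        if frac is None:
            best = bound                       # prune by integrality: node solved
            continue
        for val in (0, 1):                     # branch on the fractional variable
            child = fixed[:]
            child[frac] = val
            stack.append(tuple(child))
    return best
```

Because the relaxation bound of every pruned subtree is no better than the incumbent, the search returns the exact optimum while visiting far fewer nodes than full enumeration.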
4. Branching Acceleration for Unit Commitment
4.1. Enhanced Pruning Policy Based on Imitation Learning
4.2. Feature Design
4.2.1. Branching Tree-Related Features
- Node features: The action taken by the algorithmic policy is closely related to the state of the current node N_i. Key information about the node is therefore essential for effective decision-making, such as its depth in the B&B search tree and the objective value z_i of its relaxed problem.
- Branching features: The action also depends on the branching variable that generated the current node N_i, determined during the variable selection step at its father node. These features include the value of the branching variable at the current node and its value at the root node.
- Tree features: Information from the B&B tree search process is critical. This includes the solution of the relaxed problem at the root node, the current local upper bound z̄, and the number of solutions identified so far.
4.2.2. UC Domain-Specific Features
- Daily load features: Maximum and minimum daily loads capture crucial load information to establish overall power output limits, thereby affecting unit operation.
- Unit cost features: Operating costs directly influence generation expenses. Within the B&B tree, the generation cost and startup/shutdown costs associated with the current branching variable determine a unit’s potential operating state.
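Taken together, the two feature groups above can be assembled into a single input vector per node. The sketch below is illustrative only; the dictionary keys and the exact feature count stand in for whatever solver state the implementation exposes:

```python
def node_features(node, tree, uc):
    """Assemble one B&B node's input feature vector.

    `node`, `tree`, and `uc` are plain dicts standing in for solver state;
    all key names are assumptions, not the paper's API."""
    return [
        # --- branching-tree-related features ---
        node["depth"],                # depth of the node in the B&B search tree
        node["relaxed_obj"],          # objective value of the node's LP relaxation
        node["branch_var_value"],     # branching-variable value at this node
        tree["root_var_value"],       # that variable's value at the root relaxation
        tree["root_obj"],             # relaxed objective at the root node
        tree["upper_bound"],          # current local upper bound (incumbent)
        tree["num_solutions"],        # integer solutions identified so far
        # --- UC domain-specific features ---
        uc["load_max"],               # maximum daily load
        uc["load_min"],               # minimum daily load
        uc["gen_cost"],               # generation cost of the branching unit
        uc["startup_shutdown_cost"],  # startup/shutdown cost of that unit
    ]
```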
4.3. The Branching Acceleration for UC Framework
4.3.1. Neural Network as Classifier
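A minimal, dependency-free sketch of such a classifier — a one-hidden-layer MLP mapping a node feature vector to a preserve-probability — is shown below. The architecture, sizes, and initialization are illustrative assumptions, not the paper’s configuration, and the parameters shown are untrained:

```python
import math
import random


def mlp_score(x, params):
    """Forward pass of a one-hidden-layer MLP: node features -> probability
    that the node belongs to class "preserve" (sigmoid output)."""
    W1, b1, W2, b2 = params
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)  # ReLU hidden units
         for row, b in zip(W1, b1)]
    z = sum(w * hi for w, hi in zip(W2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))


def init_params(n_in, n_hidden, seed=0):
    """Random (untrained) parameters; imitation learning would fit them on
    the labeled preserve/prune nodes collected during training."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return W1, [0.0] * n_hidden, W2, 0.0
```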
4.3.2. Training Phase
Algorithm 2 The training process of branching acceleration for unit commitment (UC)

1: Input the M training UC instances, feature dataset D ← ∅, action dataset A ← ∅, upper bound z̄ ← +∞, training iteration k ← 1, initial policy π_0;
2: while k ≤ K do
3:   m ← 1;
4:   while m ≤ M do
5:     if k = 1 then
6:       Algorithm 1 is used to solve the m-th UC problem;
7:       Update the optimal solution and identify the optimal nodes;
8:       Label the optimal nodes as class preserve and all other nodes as class prune;
9:       Store the input features of all nodes to D;
10:      Run the B&B search on the m-th UC problem with the current policy, recording the node actions in A;
11:      Determine the correctness of the node actions in A; add the input features of error-action nodes and optimal nodes to D;
12:      Update the policy model by training on D;
13:      m ← m + 1;
14:      if m > M then
15:        k ← k + 1;
16:        m ← 1;
17:      end if
18:    else
19:      Go to step 10;
20:    end if
21:  end while
22: end while
4.3.3. Testing Phase
Algorithm 3 The testing process of branching acceleration for UC

1: Input the root relaxed problem P_0, node number i ← 0, testing iteration t ← 0, upper bound z̄ ← +∞, initial threshold τ ← τ_0;
2: L ← {P_0}, x̄ ← ∅;
3: while x̄ = ∅ do
4:   L ← {P_0}, t ← t + 1;
5:   if t > 1 then
6:     τ ← τ + Δτ, increasing the threshold to expand the search space;
7:   end if
8:   while L ≠ ∅ do
9:     i ← i + 1;
10:    Pop the node N_i with relaxed problem P_i from L and solve it, obtaining x_i, z_i, and the node features φ_i;
11:    if z_i < z̄ and x_i is an integer solution then
12:      z̄ ← z_i, x̄ ← x_i;
13:    else
14:      if z_i < z̄ then
15:        Compute the prune probability p_i of node N_i with the enhanced pruning policy on φ_i;
16:        if p_i > τ then
17:          Prune N_i;
18:        else
19:          Branch, pushing the two children nodes of N_i onto L;
20:        end if
21:      end if
22:    end if
23:  end while
24: end while
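The feasibility-ensuring outer loop of the testing phase — rerun the pruned search with a larger threshold until an integer-feasible solution is found — can be sketched as a small wrapper. The function names, the prune-probability semantics, and the default values below are assumptions for illustration, not the paper’s API:

```python
def solve_with_learned_pruning(solve_fn, prune_prob, tau0=0.4, step=0.05, tau_max=1.0):
    """Retry wrapper around a learned-pruning B&B search.

    A node is pruned when its predicted prune-probability exceeds the
    threshold tau, so raising tau keeps more nodes, i.e., expands the
    search space.  solve_fn(keep) must run the search keeping only nodes
    with keep(node) == True and return a solution or None."""
    tau = tau0
    while tau <= tau_max:
        keep = lambda node, t=tau: prune_prob(node) <= t  # True = do not prune
        solution = solve_fn(keep)
        if solution is not None:
            return solution, tau          # feasibility ensured by the retries
        tau += step                       # no solution found: enlarge search space
    return None, tau
```

The retry loop trades a little extra work in the rare failure case for a guarantee that aggressive pruning never leaves the method without a feasible schedule.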
5. Numerical Studies
5.1. Dataset Setup and Performance Evaluation Metrics
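The two evaluation metrics reported in the tables below can be computed with their usual definitions; the formulas here are common conventions assumed for illustration, not necessarily the paper’s exact definitions:

```python
def speedup(baseline_time, accelerated_time):
    """Speedup = baseline B&B solve time / accelerated solve time."""
    return baseline_time / accelerated_time


def optimality_gap(accelerated_cost, optimal_cost):
    """Relative cost increase of the accelerated solution over the optimum."""
    return (accelerated_cost - optimal_cost) / optimal_cost
```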
5.2. Performance of Branching Acceleration for UC
5.3. Generalization Capability Analysis of the Enhanced Pruning Policy
- When testing the enhanced pruning policy model for performance and generalization capability on the 12-unit standard testing system and the 5-unit practical testing system, it becomes evident that the acceleration effect is somewhat less pronounced compared to other systems. This discrepancy occurs because the B&B algorithm generates a relatively small number of search tree nodes, leaving less space for pruning acceleration. As a result, in these cases, the speedup metric does not fully capture the excellent acceleration effect of branching acceleration for UC.
- The performance and generalization capability evaluations of the 5-unit practical testing system reveal suboptimal results for the optimality gap metric. This can be attributed to the fact that systems with smaller total unit generator capacities and corresponding loads tend to have lower costs and greater cost volatility. Even a minor cost difference is reflected in a more significant optimality gap metric.
5.4. Effect of Threshold Settings on Performance
- In most situations, changing the threshold has opposite effects on the speedup and optimality gap metrics: as the threshold increases, the speedup decreases while the optimality gap shrinks. This occurs because a higher threshold (as opposed to the threshold adjustment step) causes the enhanced pruning policy to search the B&B tree more carefully; as the number of searched nodes grows, both solution quality and the likelihood of finding the optimal solution improve.
- In certain cases, the optimality gap and speedup metrics did not show opposing trends in response to varying threshold adjustment steps. This occurs due to an inappropriate threshold adjustment step setting (particularly when set to 0.01), which increases the number of testing iterations required to find a solution. During the iterative search process, a large number of unnecessary nodes are generated when no solution can be found, leading to poor acceleration performance of the method.
5.5. Application with the Branch-and-Cut Algorithm
6. Conclusions
- Branching acceleration for UC requires only a few dozen training instances to significantly reduce the computational complexity of the B&B algorithm with negligible accuracy loss, dramatically lowering data acquisition and training costs compared to existing methods that typically demand thousands or even hundreds of thousands of training samples.
- The enhanced pruning policy model exhibits outstanding generalization capability, effectively addressing the task mismatch challenge caused by dynamic changes in unit count—an issue that most existing methods struggle to overcome.
- The iterative algorithm ensures feasibility while mitigating the excessive search space issues common in other B&B improvements.
- Adjusting the iteration threshold further optimizes the method’s performance and allows it to meet the specific requirements of different UC problems in terms of computational efficiency and accuracy loss.
- Branching acceleration for UC is relatively independent of the B&B algorithm’s stages, enabling seamless integration with other B&B improvement methods to further enhance overall performance.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
UC | Unit commitment
MILP | Mixed-integer linear programming
NP-hard | Non-deterministic polynomial-time hard
ML | Machine learning
B&B | Branch-and-bound
GRU | Gated recurrent unit
KNN | K-nearest neighbor
SVM | Support vector machines
MLP | Multi-layer perceptron
B&C | Branch-and-cut
Appendix A
Unit ID | Type | Maximum Output (MW) | Minimum Output (MW) | Ramp Rate (MW/15 min) | Minimum Up/Down Time (minutes) | Startup Cost (€) | Generation Cost (€/MW)
---|---|---|---|---|---|---|---
1 | Base | 400 | 200 | 30 | 36 | 15,000 | 5 |
2 | Base | 400 | 200 | 32 | 34 | 15,000 | 5.25 |
3 | Base | 400 | 200 | 34 | 32 | 15,000 | 5.5 |
4 | Medium | 300 | 100 | 26 | 20 | 12,000 | 12.5 |
5 | Medium | 300 | 100 | 30 | 19 | 12,000 | 12.75 |
6 | Medium | 300 | 100 | 34 | 18 | 12,000 | 13 |
7 | Medium | 300 | 100 | 38 | 17 | 12,000 | 13.25 |
8 | Peak | 250 | 0 | 31 | 1 | 7500 | 80 |
9 | Peak | 250 | 0 | 32 | 1 | 7500 | 81 |
10 | Peak | 250 | 0 | 33 | 1 | 7500 | 82 |
11 | Peak | 250 | 0 | 34 | 1 | 7500 | 83 |
12 | Peak | 250 | 0 | 35 | 1 | 7500 | 84 |
 | No. of Units | No. of Binary Variables | No. of Continuous Variables
---|---|---|---
Standard testing system | 12 | 3456 | 1152
 | 24 | 6912 | 2304
 | 36 | 10,368 | 3456
Practical testing system | 5 | 1440 | 14,016
 | 10 | 2880 | 14,496
 | 12 | 3456 | 14,688
 | 15 | 4320 | 14,976
 | No. of Units | Average Node Count in B&B Tree | Average Calculation Time (s)
---|---|---|---
Standard testing system | 12 | 99 | 24.07
 | 24 | 685 | 289.10
 | 36 | 3716 | 1710.93
Practical testing system | 5 | 131 | 26.38
 | 10 | 799 | 186.74
 | 12 | 598 | 146.45
 | 15 | 852 | 352.20
No. of training daily load profiles | 20 | 25 | 30
---|---|---|---
No. of testing daily load profiles | 50 | 50 | 50
Average node count in B&B tree | 94 | 82 | 63
Average calculation time (s) | 40.88 | 35.84 | 27.86
Speedup | 7.26× | 8.36× | 10.90×
Optimality gap | 0.09% | 0.06% | 0.08%
No. of training daily load profiles | 20 | 25 | 30
---|---|---|---
No. of testing daily load profiles | 50 | 50 | 50
Average node count in B&B tree | 65 | 73 | 51
Average calculation time (s) | 17.96 | 19.77 | 14.79
Speedup | 11.92× | 10.64× | 15.58×
Optimality gap | 0.05% | 0.03% | 0.05%
 | Standard Testing System | | | Practical Testing System | | |
---|---|---|---|---|---|---|---
No. of Units | 12 | 24 | 36 | 5 | 10 | 12 | 15
Average node count in B&B tree | 31 | 63 | 209 | 26 | 51 | 64 | 77
Average calculation time (s) | 8.38 | 27.86 | 97.44 | 7.48 | 14.79 | 18.32 | 26.20
Speedup | 3.21× | 10.90× | 17.75× | 5.11× | 15.58× | 9.36× | 11.01×
Optimality gap | 0.11% | 0.08% | 0.04% | 0.64% | 0.05% | 0.04% | 0.04%
 | Standard Testing System | | Practical Testing System | |
---|---|---|---|---|---
No. of units for training | 24 | 24 | 10 | 10 | 10
No. of units for testing | 12 | 36 | 5 | 12 | 15
Average node count in B&B tree | 45 | 313 | 24 | 46 | 62
Average calculation time (s) | 11.56 | 145.67 | 7.39 | 13.93 | 21.62
Speedup | 2.21× | 11.86× | 5.36× | 13.11× | 13.71×
Optimality gap | 0.08% | 0.05% | 0.96% | 0.06% | 0.08%
No. of Units \ Threshold Adjustment Step | 0.01 | 0.02 | 0.03 | 0.04 | 0.05
---|---|---|---|---|---
5 | 8.72× | 8.79× | 5.36× | 2.89× | 2.54×
10 | 13.51× | 15.40× | 15.58× | 9.17× | 7.77×
12 | 8.38× | 9.13× | 13.11× | 8.42× | 10.99×
15 | 10.30× | 14.62× | 13.71× | 15.14× | 14.66×
No. of Units \ Threshold Adjustment Step | 0.01 | 0.02 | 0.03 | 0.04 | 0.05
---|---|---|---|---|---
5 | 1.37% | 1.32% | 0.96% | 0.81% | 0.75%
10 | 0.06% | 0.03% | 0.05% | 0.01% | 0.01%
12 | 0.10% | 0.09% | 0.06% | 0.07% | 0.06%
15 | 0.11% | 0.10% | 0.08% | 0.07% | 0.05%
 | 24-Unit Standard Testing System | | | 10-Unit Practical Testing System | | |
---|---|---|---|---|---|---
 | B&C | B&B + Branching Acceleration for UC | B&C + Branching Acceleration for UC | B&C | B&B + Branching Acceleration for UC | B&C + Branching Acceleration for UC
Average node count in B&B tree | 480 | 63 | 57 | 546 | 51 | 41
Speedup | 1.43× | 10.90× | 12.01× | 1.42× | 15.58× | 18.81×
Optimality gap | 0% | 0.08% | 0.07% | 0% | 0.05% | 0.02%
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Zhang, C.; Qin, Z.; Sun, Y. Learning-Based Branching Acceleration for Unit Commitment with Few Training Samples. Appl. Sci. 2025, 15, 3366. https://doi.org/10.3390/app15063366