A Two-Stage Generative Architecture for Renewable Scenario Generation Based on Temporal Scenario Representation and Diffusion Models
Abstract
1. Introduction
- A novel two-stage generative architecture is designed for renewable scenario generation using diffusion models. The diffusion model is extended to a conditional implicit formulation, enabling deterministic generation of specific scenarios that capture the complex patterns of renewable energy (a minimal sampling sketch follows this list).
- This paper proposes a time series representation module suited to renewable energy scenarios, which reduces computational complexity through patching. The module also introduces a priori temporal knowledge learning, addressing the difficulty diffusion models have in modeling temporal correlations.
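For intuition only, a deterministic reverse step of an implicit (DDIM-style) diffusion sampler can be written as below. This is a generic sketch, not the paper's exact conditional sampler; the predicted noise `eps_pred` would come from the conditional denoising network, which is not reproduced here.

```python
import numpy as np

def ddim_step(x_t, t, t_prev, eps_pred, alpha_bar):
    """One deterministic (eta = 0) reverse step of an implicit diffusion sampler.

    x_t       : current noisy scenario, shape (L,)
    eps_pred  : noise predicted by the (conditional) denoiser at step t
    alpha_bar : cumulative products of (1 - beta) over the diffusion steps
    """
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    # Recover an estimate of the clean scenario from the predicted noise.
    x0_pred = (x_t - np.sqrt(1.0 - a_t) * eps_pred) / np.sqrt(a_t)
    # Move deterministically toward step t_prev (no fresh noise is added).
    return np.sqrt(a_prev) * x0_pred + np.sqrt(1.0 - a_prev) * eps_pred
```

Because no fresh noise is injected, the same condition and the same initial draw always map to the same scenario, which is what "deterministic generation" refers to in the first bullet.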
2. Time Series Representation and Diffusion Models
2.1. Problem Formulation
2.2. Time Series Representation
2.3. Denoising Diffusion Probabilistic Models
3. Methodology
3.1. Scenario Representation of the First Stage
3.2. The Diffusion Model of the Second Stage
Algorithm 1: The Diffusion Model Training
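The body of Algorithm 1 is not recoverable from this extract; the following is only a generic sketch of standard denoising-diffusion (noise-prediction) training in the spirit of DDPM, with a toy `DenoiserMLP` standing in for the conditional network used in the paper.

```python
import torch
import torch.nn as nn

class DenoiserMLP(nn.Module):
    """Toy stand-in for the paper's conditional denoising network."""
    def __init__(self, length: int = 288):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(length + 1, 256), nn.ReLU(),
                                 nn.Linear(256, length))

    def forward(self, x_t, t):
        # Append a normalized step index as a crude time embedding.
        t_feat = t.float().unsqueeze(-1) / 100.0
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def train_step(model, optimizer, x0, betas):
    """One training step: diffuse clean scenarios x0 and predict the injected noise."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, len(betas), (x0.shape[0],))          # random diffusion step per sample
    eps = torch.randn_like(x0)                                # injected Gaussian noise
    a_t = alpha_bar[t].unsqueeze(-1)
    x_t = torch.sqrt(a_t) * x0 + torch.sqrt(1.0 - a_t) * eps  # forward process q(x_t | x_0)
    loss = nn.functional.mse_loss(model(x_t, t), eps)         # epsilon-prediction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```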
4. Case Studies
4.1. Experiment Settings
4.2. Performance in Scenario Representation
4.3. Performance in Scenario Generation
4.4. Quality Results
4.5. Conditional Scenario Generation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Parameter | Definition | Value |
|---|---|---|
| P | Patch length | 12 |
| S | Interval between patches | 0 |
|  | Number of encoder layers | 3 |
|  | Embedding vector dimension | 128 |
|  | Number of training epochs | 100 |
|  | Number of fine-tuning epochs | 16 |
| B | Training batch size | 32 |
| lr | Learning rate | 0.001 |
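As a minimal illustration of the patching listed above, the sketch below uses the patch length P = 12 from the table and interprets an interval of 0 as adjacent, non-overlapping patches; the 288-step profile is assumed purely for the example and the actual encoder is not shown.

```python
import numpy as np

def to_patches(series: np.ndarray, patch_len: int = 12) -> np.ndarray:
    """Split a 1-D scenario into adjacent, non-overlapping patches so the encoder
    attends over L / P patch tokens instead of L individual time steps."""
    usable = (len(series) // patch_len) * patch_len   # drop any trailing remainder
    return series[:usable].reshape(-1, patch_len)     # shape: (num_patches, patch_len)

# Hypothetical 288-step daily profile -> 24 patches of length 12.
profile = np.random.rand(288)
print(to_patches(profile).shape)   # (24, 12)
```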
| Parameter | Definition | Value |
|---|---|---|
|  | Number of downsampling residual layers | 3 |
|  | Number of upsampling residual layers | 3 |
|  | Number of ConvNeXt blocks per residual layer | 2 |
|  | Channel dimensions of the convolutions | 24, 48, 96 |
|  | Convolution kernel size |  |
|  | Noise schedule | (0.0001, 0.1) |
| T | Diffusion steps | 100 |
| B | Training batch size | 64 |
|  | Number of training epochs | 200 |
| lr | Learning rate | 0.001 |
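For concreteness, the noise schedule endpoints in the table ((0.0001, 0.1) over T = 100 steps) can be turned into the quantities used by the forward diffusion. A linear schedule is assumed here, which is common for DDPMs but not confirmed by this extract.

```python
import numpy as np

T = 100                                # diffusion steps (from the table)
betas = np.linspace(1e-4, 0.1, T)      # assumed linear schedule between the tabulated endpoints
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)         # cumulative products used in q(x_t | x_0)

# By the final step the signal coefficient sqrt(alpha_bar[T-1]) is small,
# so x_T is dominated by Gaussian noise.
print(float(alpha_bar[-1]))
```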
|  | Metric | Proposed | DDPM | VAE | LSGAN |
|---|---|---|---|---|---|
| Wind | RMSE | 0.5184 | 1.1210 | 0.8240 | 0.9100 |
|  | MAE | 0.3857 | 0.8236 | 0.6318 | 0.6774 |
|  | MMD | 0.2374 | 0.5342 | 0.3089 | 0.3937 |
|  | ES | 2.1907 | 4.3107 | 3.7771 | 4.2895 |
|  | VS | 23.6096 | 92.2627 | 80.9852 | 85.7794 |
|  | CR | 99.8863 | 80.3226 | 85.1072 | 81.5971 |
| Solar | RMSE | 0.1395 | 0.9531 | 0.1824 | 0.2465 |
|  | MAE | 0.0838 | 0.5667 | 0.1101 | 0.1536 |
|  | MMD | 0.2819 | 3.2079 | 0.3660 | 0.3345 |
|  | ES | 0.2352 | 1.9264 | 0.3550 | 0.3509 |
|  | VS | 1.0492 | 20.5207 | 2.4495 | 3.6483 |
|  | CR | 99.4531 | 81.4068 | 87.6427 | 82.1662 |
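As a rough guide to how two of the tabulated metrics can be computed from a set of generated scenarios, the sketch below uses one common definition of RMSE (error of the scenario mean) and the standard energy score (ES); the authors' exact definitions and normalizations, and the remaining metrics (MAE, MMD, VS, CR), are not reproduced here.

```python
import numpy as np

def rmse(scenarios: np.ndarray, actual: np.ndarray) -> float:
    """RMSE of the scenario mean against the observed profile.
    scenarios: (m, L) generated scenarios; actual: (L,) observation."""
    return float(np.sqrt(np.mean((scenarios.mean(axis=0) - actual) ** 2)))

def energy_score(scenarios: np.ndarray, actual: np.ndarray) -> float:
    """Energy score of a scenario set against one observation (lower is better)."""
    m = scenarios.shape[0]
    term1 = np.mean(np.linalg.norm(scenarios - actual, axis=1))
    pair_dists = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=-1)
    return float(term1 - pair_dists.sum() / (2.0 * m * m))

# Hypothetical example: 50 generated daily profiles vs. one observed profile.
gen = np.random.rand(50, 24)
obs = np.random.rand(24)
print(rmse(gen, obs), energy_score(gen, obs))
```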