A Novel LSTM Model with Interaction Dual Attention for Radar Echo Extrapolation
Abstract
1. Introduction
- We first develop an interaction scheme to enhance the short-term dependency modeling ability of ConvRNN approaches. The scheme is a general framework that can be applied to any ConvRNN model.
- We introduce a dual attention mechanism that combines long-term temporal and channel information for the temporal memory cell. The mechanism helps recall long-term dependencies and form better spatiotemporal representations.
- By combining the interaction scheme and the dual attention mechanism, we propose our IDA-LSTM approach for radar echo map extrapolation. Comprehensive experiments on the CIKM AnalytiCup 2017 radar datasets show that IDA-LSTM achieves state-of-the-art results, especially in high radar echo regions. To reproduce the results, we release the source code at: https://github.com/luochuyao/IDA_LSTM.
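The dual attention idea above can be illustrated with a toy sketch. The function below is a hypothetical, simplified rendering (not the authors' exact formulation, and all names are invented for illustration): temporal attention weights past memory cells by their similarity to the current memory query, and channel attention then re-weights channels of the aggregated memory through a channel-affinity map, in the spirit of dual attention networks.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention(query, memories):
    """Hypothetical dual (temporal + channel) attention over past memory cells.

    query:    (C, H, W) current temporal memory cell
    memories: (T, C, H, W) stack of T past memory cells
    Returns an aggregated memory of shape (C, H, W).
    """
    T, C, H, W = memories.shape
    q = query.reshape(C, -1)        # (C, H*W)
    m = memories.reshape(T, C, -1)  # (T, C, H*W)

    # Temporal attention: scaled similarity between the query and each past memory.
    t_scores = np.array([(q * m[t]).sum() for t in range(T)]) / np.sqrt(C * H * W)
    t_weights = softmax(t_scores)  # (T,), sums to 1
    temporal = (t_weights[:, None, None, None] * memories).sum(axis=0)  # (C, H, W)

    # Channel attention: channel-channel affinity on the aggregated memory.
    f = temporal.reshape(C, -1)                              # (C, H*W)
    c_weights = softmax(f @ f.T / np.sqrt(H * W), axis=-1)   # (C, C), rows sum to 1
    return (c_weights @ f).reshape(C, H, W)

# Toy shapes only; real inputs would be radar echo feature maps.
rng = np.random.default_rng(0)
out = dual_attention(rng.normal(size=(8, 4, 4)), rng.normal(size=(5, 8, 4, 4)))
print(out.shape)  # (8, 4, 4)
```

The point of the sketch is the two-stage aggregation: the temporal branch decides *which* past states to recall, and the channel branch decides *which* features of the recalled state to emphasize.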
2. Proposed Method
2.1. Interaction Framework
2.2. Dual Attention Mechanism
2.3. The IDA-LSTM Unit
2.4. The IDA-LSTM Extrapolation Architecture
3. Experiment
3.1. Experimental Setup
3.1.1. Dataset
3.1.2. Evaluation Metrics
3.1.3. Parameter and Training Setting
3.2. Experimental Results
3.3. Ablation Study
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Marshall, J.S.; Palmer, W.M.K. The distribution of raindrops with size. J. Meteorol. 1948, 5, 165–166.
- Wong, W.; Yeung, L.H.; Wang, Y.; Chen, M. Towards the Blending of NWP with Nowcast—Operation Experience in B08FDP. In Proceedings of the WMO Symposium on Nowcasting, Hong Kong, China, 25–29 July 2009; Volume 30.
- Woo, W.C.; Wong, W.K. Operational application of optical flow techniques to radar-based rainfall nowcasting. Atmosphere 2017, 8, 48.
- Shi, X.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 802–810.
- Ballas, N.; Yao, L.; Pal, C.; Courville, A. Delving deeper into convolutional networks for learning video representations. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, PR, USA, 2–4 May 2016.
- Shi, X.; Gao, Z.; Lausen, L.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Deep learning for precipitation nowcasting: A benchmark and a new model. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5617–5627.
- Tran, Q.K.; Song, S.K. Computer vision in precipitation nowcasting: Applying image quality assessment metrics for training deep neural networks. Atmosphere 2019, 10, 244.
- Wang, Y.; Zhang, J.; Zhu, H.; Long, M.; Wang, J.; Yu, P.S. Memory in memory: A predictive neural network for learning higher-order non-stationarity from spatiotemporal dynamics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 9154–9162.
- Ayzel, G.; Heistermann, M.; Sorokin, A.; Nikitin, O.; Lukyanova, O. All convolutional neural networks for radar-based precipitation nowcasting. Procedia Comput. Sci. 2019, 150, 186–192.
- Shi, E.; Li, Q.; Gu, D.; Zhao, Z. A method of weather radar echo extrapolation based on convolutional neural networks. In Proceedings of the International Conference on Multimedia Modeling, Bangkok, Thailand, 5–7 February 2018; pp. 16–28.
- Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3104–3112.
- Jaderberg, M.; Simonyan, K.; Zisserman, A. Spatial transformer networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 2017–2025.
- Marrocu, M.; Massidda, L. Performance Comparison between Deep Learning and Optical Flow-Based Techniques for Nowcast Precipitation from Radar Images. Forecasting 2020, 2, 194–210.
- Jozefowicz, R.; Zaremba, W.; Sutskever, I. An empirical exploration of recurrent network architectures. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 2342–2350.
- Wang, Y.; Long, M.; Wang, J.; Gao, Z.; Philip, S.Y. PredRNN: Recurrent neural networks for predictive learning using spatiotemporal LSTMs. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 879–888.
- Wang, Y.; Gao, Z.; Long, M.; Wang, J.; Philip, S.Y. PredRNN++: Towards a Resolution of the Deep-in-Time Dilemma in Spatiotemporal Predictive Learning. In Proceedings of the International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 5123–5132.
- Tran, Q.K.; Song, S.K. Multi-channel weather radar echo extrapolation with convolutional recurrent neural networks. Remote Sens. 2019, 11, 2303.
- Wang, Y.; Jiang, L.; Yang, M.H.; Li, J.; Long, M.; Li, F.F. Eidetic 3D LSTM: A Model for Video Prediction and Beyond. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019; pp. 1–14.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
- Lin, Z.; Feng, M.; Santos, C.N.D.; Yu, M.; Xiang, B.; Zhou, B.; Bengio, Y. A structured self-attentive sentence embedding. arXiv 2017, arXiv:1703.03130.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 3146–3154.
- Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer Normalization. arXiv 2016, arXiv:1607.06450.
- Hogan, R.J.; Ferro, C.A.; Jolliffe, I.T.; Stephenson, D.B. Equitability revisited: Why the “equitable threat score” is not equitable. Weather Forecast. 2010, 25, 710–726.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
Layer | Input Kernel | Input Stride | Padding | State Kernel | Spatial Kernel | Channels In/Out | Input Resolution | Output Resolution | Type
---|---|---|---|---|---|---|---|---|---
Layer 1 | 5 × 5 | 1 × 1 | 2 × 2 | 5 × 5 | 5 × 5 | 16/32 | 32 × 32 | 32 × 32 | IDA-LSTM |
Layer 2 | 5 × 5 | 1 × 1 | 2 × 2 | 5 × 5 | 5 × 5 | 32/32 | 32 × 32 | 32 × 32 | IDA-LSTM |
Layer 3 | 5 × 5 | 1 × 1 | 2 × 2 | 5 × 5 | 5 × 5 | 32/32 | 32 × 32 | 32 × 32 | IDA-LSTM |
Layer 4 | 5 × 5 | 1 × 1 | 2 × 2 | 5 × 5 | 5 × 5 | 32/32 | 32 × 32 | 32 × 32 | IDA-LSTM |
Layer 5 | 1 × 1 | 1 × 1 | 0 × 0 | - | - | 32/16 | 32 × 32 | 32 × 32 | Conv2D |
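The resolutions in the table follow from the standard convolution output-size formula, out = (in + 2·pad − kernel) / stride + 1. A quick check (helper name invented for illustration) confirms that both the 5 × 5 recurrent layers and the final 1 × 1 Conv2D preserve the 32 × 32 resolution:

```python
def conv_out(size, kernel, stride, pad):
    """Standard convolution output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

# Layers 1-4: 5x5 kernel, stride 1, padding 2 -> (32 + 4 - 5) / 1 + 1 = 32.
print(conv_out(32, kernel=5, stride=1, pad=2))  # 32
# Layer 5: 1x1 kernel, stride 1, padding 0 -> (32 + 0 - 1) / 1 + 1 = 32.
print(conv_out(32, kernel=1, stride=1, pad=0))  # 32
```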
Model | HSS@5 dBZ ↑ | HSS@20 dBZ ↑ | HSS@40 dBZ ↑ | HSS avg ↑ | CSI@5 dBZ ↑ | CSI@20 dBZ ↑ | CSI@40 dBZ ↑ | CSI avg ↑ | MAE ↓ | SSIM ↑
---|---|---|---|---|---|---|---|---|---|---
ConvLSTM [4] | 0.7035 | 0.4819 | 0.1081 | 0.4312 | 0.7656 | 0.4034 | 0.0578 | 0.4089 | 15.06 | 0.2229 |
ConvGRU [6] | 0.6776 | 0.4766 | 0.1510 | 0.4351 | 0.7473 | 0.3907 | 0.0823 | 0.4068 | 16.27 | 0.1370 |
TrajGRU [6] | 0.6828 | 0.4862 | 0.1621 | 0.4437 | 0.7499 | 0.4020 | 0.0888 | 0.4136 | 15.99 | 0.1519 |
PredRNN [15] | 0.7080 | 0.4911 | 0.1558 | 0.4516 | 0.7691 | 0.4048 | 0.0839 | 0.4198 | 14.54 | 0.3341 |
PredRNN++ [16] | 0.7075 | 0.4993 | 0.1574 | 0.4548 | 0.7670 | 0.4137 | 0.0862 | 0.4223 | 14.51 | 0.3357 |
E3D-LSTM [18] | 0.7111 | 0.4810 | 0.1361 | 0.4427 | 0.7720 | 0.4060 | 0.0734 | 0.4171 | 14.78 | 0.3089 |
MIM [8] | 0.7052 | 0.5166 | 0.1858 | 0.4692 | 0.7628 | 0.4279 | 0.1034 | 0.4313 | 14.69 | 0.2123 |
DA-LSTM | 0.7184 | 0.5251 | 0.2127 | 0.4854 | 0.7765 | 0.4376 | 0.1202 | 0.4448 | 14.10 | 0.3479 |
IDA-LSTM | 0.7179 | 0.5264 | 0.2262 | 0.4902 | 0.7752 | 0.4372 | 0.1287 | 0.4470 | 14.09 | 0.3506 |
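The HSS and CSI scores at each dBZ threshold above are computed from a 2 × 2 contingency table after thresholding prediction and ground truth. A minimal sketch using the standard definitions (the toy arrays are invented for illustration):

```python
import numpy as np

def contingency(pred, truth, thr):
    """2x2 contingency counts for exceeding a dBZ threshold."""
    p, t = pred >= thr, truth >= thr
    tp = int(np.sum(p & t))    # hits
    fp = int(np.sum(p & ~t))   # false alarms
    fn = int(np.sum(~p & t))   # misses
    tn = int(np.sum(~p & ~t))  # correct rejections
    return tp, fp, fn, tn

def csi(tp, fp, fn, tn):
    """Critical Success Index: hits / (hits + misses + false alarms)."""
    return tp / (tp + fn + fp)

def hss(tp, fp, fn, tn):
    """Heidke Skill Score in its standard 2x2 form."""
    num = 2.0 * (tp * tn - fn * fp)
    den = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    return num / den

# Toy example at the 20 dBZ threshold: tp=2, fp=0, fn=1, tn=1.
pred  = np.array([10.0, 30.0, 50.0, 5.0])
truth = np.array([20.0, 25.0, 45.0, 5.0])
counts = contingency(pred, truth, thr=20)
print(csi(*counts), hss(*counts))  # 0.666..., 0.5
```

Averaging these scores over the 5, 20, and 40 dBZ thresholds gives the "avg" columns in the table.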
Model | HSS@5 dBZ ↑ | HSS@20 dBZ ↑ | HSS@40 dBZ ↑ | HSS avg ↑ | CSI@5 dBZ ↑ | CSI@20 dBZ ↑ | CSI@40 dBZ ↑ | CSI avg ↑ | MAE ↓ | SSIM ↑
---|---|---|---|---|---|---|---|---|---|---
ConvLSTM | 0.7035 | 0.4819 | 0.1081 | 0.4312 | 0.7656 | 0.4034 | 0.0578 | 0.4089 | 15.06 | 0.2229 |
IConvLSTM | 0.7149 | 0.4889 | 0.1236 | 0.4424 | 0.7769 | 0.4119 | 0.0667 | 0.4184 | 14.62 | 0.3390 |
IConvLSTM | 0.7055 | 0.5001 | 0.1215 | 0.4424 | 0.7668 | 0.4120 | 0.0652 | 0.4146 | 14.42 | 0.3365 |
IConvLSTM | 0.7092 | 0.4740 | 0.1247 | 0.4360 | 0.7784 | 0.4118 | 0.0671 | 0.4191 | 15.11 | 0.3372 |
IConvLSTM | 0.5645 | 0.4044 | 0.0830 | 0.3503 | 0.6305 | 0.3362 | 0.0453 | 0.3373 | 20.65 | 0.3111 |
IPredRNN | 0.7081 | 0.4911 | 0.1558 | 0.4516 | 0.7691 | 0.4048 | 0.0854 | 0.4198 | 14.54 | 0.3341 |
IPredRNN | 0.7133 | 0.5108 | 0.2047 | 0.4762 | 0.7685 | 0.4188 | 0.1151 | 0.4341 | 14.03 | 0.3488 |
IPredRNN | 0.7081 | 0.5039 | 0.1531 | 0.4550 | 0.7710 | 0.4154 | 0.0836 | 0.4233 | 14.40 | 0.3312 |
IPredRNN | 0.7001 | 0.5179 | 0.1951 | 0.4710 | 0.7710 | 0.4289 | 0.1089 | 0.4359 | 14.52 | 0.3281 |
IPredRNN | 0.7111 | 0.5019 | 0.2155 | 0.4762 | 0.7726 | 0.4101 | 0.1218 | 0.4348 | 14.20 | 0.3327 |
IPredRNN++ | 0.7075 | 0.4993 | 0.1575 | 0.4548 | 0.7670 | 0.4137 | 0.0862 | 0.4223 | 14.51 | 0.3357 |
IPredRNN++ | 0.7188 | 0.5100 | 0.2004 | 0.4764 | 0.7759 | 0.4251 | 0.1124 | 0.4378 | 14.13 | 0.3513 |
IPredRNN++ | 0.7119 | 0.5037 | 0.2098 | 0.4751 | 0.7715 | 0.4204 | 0.1181 | 0.4367 | 14.33 | 0.3423 |
IPredRNN++ | 0.7023 | 0.4995 | 0.1610 | 0.4543 | 0.7665 | 0.4110 | 0.0882 | 0.4219 | 14.59 | 0.3255 |
IPredRNN++ | 0.7153 | 0.4968 | 0.2172 | 0.4764 | 0.7774 | 0.4239 | 0.1234 | 0.4416 | 14.62 | 0.3487 |
DA-LSTM | 0.7185 | 0.5251 | 0.2127 | 0.4854 | 0.7765 | 0.4376 | 0.1202 | 0.4448 | 14.10 | 0.3479 |
IDA-LSTM | 0.7093 | 0.5065 | 0.1606 | 0.4588 | 0.7683 | 0.4218 | 0.0881 | 0.4261 | 14.38 | 0.3345 |
IDA-LSTM | 0.7179 | 0.5264 | 0.2262 | 0.4902 | 0.7752 | 0.4372 | 0.1287 | 0.4470 | 14.09 | 0.3506 |
IDA-LSTM | 0.7179 | 0.5156 | 0.1879 | 0.4738 | 0.7798 | 0.4342 | 0.1044 | 0.4395 | 14.18 | 0.3436 |
IDA-LSTM | 0.7068 | 0.5085 | 0.1930 | 0.4694 | 0.7631 | 0.4125 | 0.1081 | 0.4279 | 14.21 | 0.3461 |
Model | HSS@5 dBZ ↑ | HSS@20 dBZ ↑ | HSS@40 dBZ ↑ | HSS avg ↑ | CSI@5 dBZ ↑ | CSI@20 dBZ ↑ | CSI@40 dBZ ↑ | CSI avg ↑ | MAE ↓ | SSIM ↑
---|---|---|---|---|---|---|---|---|---|---
PredRNN | 0.7081 | 0.4911 | 0.1558 | 0.4516 | 0.7691 | 0.4048 | 0.0854 | 0.4198 | 14.54 | 0.3341 |
SA-LSTM | 0.7042 | 0.4982 | 0.1481 | 0.4502 | 0.7689 | 0.4143 | 0.0808 | 0.4213 | 14.68 | 0.3241 |
CA-LSTM | 0.7115 | 0.5066 | 0.1575 | 0.4585 | 0.7733 | 0.4172 | 0.0861 | 0.4255 | 14.23 | 0.3296 |
DA-LSTM | 0.7185 | 0.5251 | 0.2127 | 0.4854 | 0.7765 | 0.4376 | 0.1202 | 0.4448 | 14.10 | 0.3479 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Luo, C.; Li, X.; Wen, Y.; Ye, Y.; Zhang, X. A Novel LSTM Model with Interaction Dual Attention for Radar Echo Extrapolation. Remote Sens. 2021, 13, 164. https://doi.org/10.3390/rs13020164