Article

PRSOT: Precipitation Retrieval from Satellite Observations Based on Transformer

1 Joint Center of Data Assimilation for Research and Application, School of Atmospheric Sciences, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Control Science and Engineering, Shandong University, Jinan 250061, China
3 Shandong Research Institute of Industrial Technology, Jinan 250061, China
4 School of Atmospheric Sciences, Nanjing University of Information Science and Technology, Nanjing 210044, China
5 College of Computer Science, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Atmosphere 2022, 13(12), 2048; https://doi.org/10.3390/atmos13122048
Submission received: 20 October 2022 / Revised: 30 November 2022 / Accepted: 2 December 2022 / Published: 7 December 2022
(This article belongs to the Special Issue Identification and Optimization of Retrieval Model in Atmosphere)

Abstract

Precipitation data with high spatial and temporal resolution can improve the capability to defend against meteorological disasters and provide indispensable guidance and early warning for public services such as agriculture, forestry, and transportation. Therefore, a deep learning-based algorithm entitled Precipitation Retrieval from Satellite Observations based on Transformer (PRSOT) is proposed to fill the observation gaps left by ground rain gauges and weather radars in deserts, oceans, and other regions. In this algorithm, the multispectral infrared brightness temperatures from Himawari-8, the new-generation geostationary satellite, are used as predictor variables, and the Global Precipitation Measurement (GPM) precipitation product is employed to train the retrieval model. We utilized two data normalization schemes, area-based and pixel-based normalization, and conducted comparative experiments. Comparing the estimated results with the GPM product on the test set, the PRSOT_Pixel_based model achieved a Probability Of Detection (POD) of 0.74, a False Alarm Ratio (FAR) of 0.44, and a Critical Success Index (CSI) of 0.47 for the two-class metrics, and an Accuracy (ACC) of 0.75 for the multi-class metrics. Pixel-based normalization is more suitable for meteorological data: it highlights the precipitation characteristics and yields better overall retrieval performance in both visualization and evaluation metrics. In conclusion, the proposed PRSOT model makes a remarkable and essential contribution to precipitation retrieval and outperforms the benchmark machine learning model, Random Forests.

1. Introduction

Precipitation is a crucial component of the global water cycle and regulates the energy balance and overall climate system [1]. Due to global warming, extreme precipitation is becoming more frequent and intense [2]. Extreme precipitation can trigger floods, landslides, mudslides, and other natural disasters, resulting in severe direct and indirect losses [3,4,5]. Therefore, effective precipitation monitoring can enhance the ability to defend against meteorological disasters, provide essential guidance and early warning for public services, and reduce economic and human losses [6,7].
The traditional ground-based observations of precipitation come mostly from rain gauges and weather radars, which are widely applied in operational meteorological forecasting because of their high precision. However, restricted by natural and economic conditions, ground-based observations suffer from uneven distribution, limited coverage, and insufficient spatial representativeness, especially over high mountains, vast water areas, and undeveloped regions [8]. In addition, rain gauge observations suffer from spatiotemporal discontinuity and data irregularity; even if station data are interpolated in space and time, it is challenging to accurately portray the spatial and temporal distribution of precipitation [9,10]. Attenuation along the rain path, the unstable relationship between reflectivity and rain rate (Z-R), and terrain occlusion introduce uncertainty into radar-estimated precipitation [11,12]. Therefore, rain gauges and weather radars may not accurately represent the characteristics of large-area precipitation distribution. Unlike rain gauges and weather radars, meteorological satellites can conduct thorough observations from top to bottom, regardless of terrain or environmental circumstances. Consequently, satellite observations offer significant advantages in coverage and spatiotemporal continuity, and are currently the only feasible technology for fine-grained monitoring of regional or even global precipitation [13,14].
Satellite precipitation retrieval methods can be divided into three categories based on the satellite detectors: microwave remote sensing, visible/infrared (VIS/IR) remote sensing, and multi-sensor joint retrieval products. Microwaves can penetrate clouds and detect information that is more directly relevant to precipitation, such as temperature and humidity [15,16]; consequently, the resulting precipitation products are of high quality. However, microwave sensors are usually carried on satellites in low Earth orbit, so they suffer from coverage gaps and low sampling frequencies. VIS/IR remote sensing can provide precipitation products with high temporal and spatial resolution. However, VIS remote sensing is only applicable during the daytime due to its dependence on sunlight, and IR retrieval products have low accuracy because the cloud-top information detected by infrared remote sensing is only indirectly related to surface rain rates [17,18]. Multi-sensor joint retrieval products combine the benefits of different observations to improve accuracy, coverage, and resolution, albeit with a long time lag that limits their use in real-time tasks such as nowcasting. Since microwave remote sensing, VIS remote sensing, and multi-sensor joint retrieval products all have unavoidable limitations, more research has focused on optimizing IR retrieval algorithms to improve the accuracy of regional and global precipitation estimates.
According to how the brightness temperature (TBB) is processed, retrieval algorithms can be divided into three categories: pixel-based [19,20,21,22], pane-based [23], and cloud-based [18,24,25,26] algorithms. A pixel-based algorithm estimates precipitation at a pixel based only on the TBBs at that pixel; examples include the Geostationary Operational Environmental Satellite (GOES) Precipitation Index (GPI) [19], the Auto-estimator [20], the Hydro-estimator [21], and the GOES MultiSpectral Rainfall Algorithm (GMSRA) [22]. A pane-based algorithm estimates precipitation at a pixel from the TBBs at that pixel and at nearby pixels, accounting for the influence of the surroundings on the center pixel; for example, Wu et al. [23] extracted radiance and textural features in a 20 km × 20 km pane to establish a relationship with the precipitation at the central pixel. A cloud-based algorithm segments cloud blocks, extracts characteristic information from each block, and establishes a relationship model between this information and precipitation; typical algorithms include the Griffith–Woodley technique [24] and Convective-Stratiform Technology (CST) [25]. Traditional algorithms generally rely on linear, quadratic, power, or exponential functions, which cannot accurately describe the complex infrared-precipitation relationship, so improving the retrieval accuracy is difficult. Therefore, other innovative methods need to be introduced to overcome these deficiencies.
In recent years, the rise of machine learning (ML) has provided a new direction for infrared precipitation retrieval. Various ML techniques, such as Artificial Neural Networks (ANN) [27,28,29,30], Support Vector Machines (SVM) [31,32,33], Gradient Boosting Trees (GBT) [34], Random Forests (RF) [35,36,37,38,39], and Convolutional Neural Networks (CNN) [40], have been successfully and extensively applied. For example, Hsu et al. [27] developed the Precipitation Estimation from Remotely Sensed Information using ANN (PERSIANN) model, which greatly improved the estimation of diverse precipitation characteristics across geographical regions and times. The upgraded versions, the PERSIANN-Cloud Classification System (PERSIANN-CCS) [28] and PERSIANN-Multispectral Analysis (PERSIANN-MSA) [29], were developed to further improve accuracy. Ma et al. [34] implemented IR precipitation retrieval using GBT, trained separately for three typical scenarios: daytime, twilight, and nighttime. Min et al. [37] used RF to train classification and regression models for predicting non-precipitation/precipitation pixels and rain rates, respectively, using IR data from Himawari-8 and numerical weather prediction data from the Global Forecast System. Wang et al. [40] first investigated the effectiveness of CNNs and developed an algorithm entitled IR Precipitation Estimation using a CNN (IPEC) based on five-channel IR data. These efforts show that ML techniques achieve better retrieval performance than traditional algorithms. However, the retrieval capacity for heavy rain, which is more prone to causing disasters, is still limited.
The Transformer, a deep learning technique, has received increasing attention for traditionally challenging problems and shows potential for IR precipitation retrieval. The Transformer first became known for its success in natural language processing (NLP). Vaswani et al. [41] proposed the Transformer model, which relies entirely on the attention mechanism, for machine translation. Devlin et al. [42] designed the Bidirectional Encoder Representation from Transformers (BERT) model, pre-trained it to obtain bidirectional representations, and then fine-tuned it for numerous downstream tasks, such as language inference and question answering; BERT achieved state-of-the-art performance on various NLP tasks at the time. Subsequently, a wide spectrum of Transformer-based models emerged, such as Generative Pre-Training (GPT) [43], Transformer-XL [44], and TinyBERT [45]. These models show strong representation capacity and have made breakthroughs in the NLP area. Inspired by the power of the Transformer in NLP, researchers began to explore its application in computer vision (CV) [46,47,48,49]. For example, Dosovitskiy et al. [49] proposed the Vision Transformer (ViT) model for image recognition, which achieves better performance than CNNs. In general, Transformer-based models have been widely adopted in the NLP and CV areas and have shown competitive or even better performance than convolutional and recurrent networks. This motivates applying the Transformer to IR precipitation retrieval. Real-time, high-quality precipitation retrieval products can effectively fill the gaps in traditional ground-based observations, facilitate the analysis of large-scale precipitation, and support real-time applications such as nowcasting. Accumulated high-quality precipitation retrieval products can also be used to characterize the interannual and interdecadal variability of precipitation.
Thus, we propose a deep learning algorithm entitled Precipitation Retrieval from Satellite Observations based on Transformer (PRSOT) in this paper. Based on Himawari-8 infrared spectrum data, PRSOT estimates precipitation rates at half-hourly intervals and 0.05° × 0.05° resolution. The performance of PRSOT is compared with that of Random Forests (RF), a widely used ML algorithm for IR precipitation retrieval [35,36,37,38,39]. The proposed algorithm can generate precipitation distributions in near real-time based on satellite observations, solving the time-lag problem of multi-sensor joint retrieval products. The model estimates can effectively identify precipitation areas and help fill the ground observation gap in areas such as the ocean.
This article is organized as follows. Section 2 carefully describes the datasets, PRSOT algorithm, and evaluation metrics. Section 3 shows the significant results of PRSOT from various aspects. The performance of the proposed algorithm in space and time is further evaluated in Section 4. Finally, the conclusion and recommended future works are presented in Section 5.

2. Materials and Methods

2.1. Materials

For geostationary satellite data, we use high-spatiotemporal-resolution multispectral data from Japan's Himawari-8 satellite, a new-generation geostationary meteorological satellite operated by the Japan Meteorological Agency (JMA). Because the VIS bands are inevitably affected by sunlight, we employ only the TBBs obtained from the IR bands to establish an all-weather model. At the same time, radiative transfer is affected by the satellite zenith angle (SAZ) and the solar zenith angle (SOZ), which causes geographical differences in satellite observations. Therefore, we used the TBBs from the IR bands together with the two angles (SAZ and SOZ) as predictors. The data have a horizontal resolution of 0.05° × 0.05° and a temporal resolution of 10 min.
For precipitation data, we used the Global Precipitation Measurement (GPM) product, which is provided by the National Aeronautics and Space Administration (NASA) and usually released about 3.5 months after the observation. Because its retrieval algorithm, the Integrated Multi-satellite Retrievals for GPM (IMERG), combines microwave and infrared sensors, the new generation of GPM precipitation products has higher accuracy, more extensive coverage, and higher temporal (30 min) and spatial (0.1° × 0.1°) resolution. IMERG satellite precipitation products include the early (IMERG_E), late (IMERG_L), and final (IMERG_F) run products. The IMERG_F product is superior to the other IMERG products in practical applications, such as peak flow capture [50] and hydrological simulation of streamflow [51]. Compared with other high-resolution precipitation products, the IMERG_F product is more accurate in extreme precipitation detection and precipitation intensity estimation [52].
The geographical range of this study is southeast China (107.75°E–124.75°E and 17.75°N–34.75°N), as shown in Figure 1. The study area lies in the eastern region of the Eurasian continent on the west coast of the Pacific Ocean. The enormous sea–land thermal contrast creates a typical monsoon climate. In summer, the southeast and southwest monsoons bring large amounts of water vapor, producing abundant precipitation. The sufficient amount and diversity of precipitation events in the study area meet the data requirements for training a deep learning model and allow the model to learn the nonlinear IR-precipitation relationship. Therefore, PRSOT is constructed with data from June and July 2018 as the training set and data from August 2018 as the test set.

2.2. Data Processing

First, we matched the Himawari-8 satellite data and GPM precipitation products in space and time. Spatially, we used bilinear interpolation to increase the resolution of the precipitation product from 0.1° × 0.1° to 0.05° × 0.05°, matching the satellite data pixel to pixel. Temporally, each instantaneous geostationary observation taken at the midpoint of a half-hour window corresponds to the precipitation accumulated by the product over that window. For example, the instantaneous Himawari-8 observations at 0020 UTC and 0050 UTC match the cumulative GPM IMERG precipitation from 0000 UTC to 0030 UTC and from 0030 UTC to 0100 UTC, respectively.
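For illustration only, the following is a minimal Python sketch of this matching step, assuming the IMERG field for one half-hour window is held as a NumPy array on a regular 0.1° latitude-longitude grid over the study area; the array and function names are placeholders, not the authors' code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Source (0.1 deg, IMERG) and target (0.05 deg, Himawari-8) grids over the study area.
lat_01 = np.linspace(17.75, 34.75, 171)
lon_01 = np.linspace(107.75, 124.75, 171)
lat_005 = np.linspace(17.75, 34.75, 341)
lon_005 = np.linspace(107.75, 124.75, 341)

def regrid_bilinear(precip_01):
    """Bilinearly interpolate one half-hourly IMERG field (lat x lon) from 0.1 deg to 0.05 deg."""
    interp = RegularGridInterpolator((lat_01, lon_01), precip_01, method="linear")
    glat, glon = np.meshgrid(lat_005, lon_005, indexing="ij")
    pts = np.stack([glat.ravel(), glon.ravel()], axis=-1)
    return interp(pts).reshape(glat.shape)

def imerg_window_start(scan_minute):
    """Map a Himawari-8 scan time (minute past the hour, e.g. 20 or 50)
    to the start minute of the half-hourly IMERG accumulation it is matched to."""
    return 0 if scan_minute < 30 else 30
```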
Secondly, we divided the precipitation data into No Rain, Light Rain, Moderate Rain, and Heavy Rain according to previous research [53]; the specific classification is shown in Table 1. Statistical analysis revealed an obvious class imbalance problem, which is not conducive to training deep learning models: Figure 2 shows that the proportion of No Rain is as high as 81.9%, and the proportion of Rain is less than 20% in the raw data. At the same time, the proposed model is pixel-based and many adjacent pixels are similar, which may lead to data redundancy. Therefore, it is necessary to downsample the training set to reduce the amount of data while alleviating the class imbalance problem. Finally, we set the ratio of No Rain, Light Rain, Moderate Rain, and Heavy Rain to 1.25:1.25:1:1 when training the model. It is worth mentioning that we also noticed a clear trend over time for the rainy samples. Figure 3 shows that there is more rainfall from afternoon to evening (0600 UTC–1100 UTC) and less rainfall at other times, with an overall fluctuating character. Therefore, when sampling the original data, we sample each category in different proportions at each moment to ensure the balance of categories while retaining the diurnal characteristics.
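As an illustrative sketch of this per-time, per-grade downsampling, the snippet below assumes the matched samples sit in a hypothetical pandas DataFrame with "time" and "grade" columns and that Heavy Rain is the rarest grade at each time step, which sets the sampling scale; none of this is the authors' code.

```python
import pandas as pd

# Target ratio No Rain : Light Rain : Moderate Rain : Heavy Rain = 1.25 : 1.25 : 1 : 1.
RATIO = {"No Rain": 1.25, "Light Rain": 1.25, "Moderate Rain": 1.0, "Heavy Rain": 1.0}

def downsample(samples: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Sample each grade separately at each time step so the overall ratio is met
    while the diurnal distribution of rainy samples is preserved."""
    parts = []
    for _, group in samples.groupby("time"):
        n_ref = int((group["grade"] == "Heavy Rain").sum())  # reference count at this time
        for grade, ratio in RATIO.items():
            pool = group[group["grade"] == grade]
            n = min(len(pool), int(round(ratio * n_ref)))
            if n > 0:
                parts.append(pool.sample(n=n, random_state=seed))
    return pd.concat(parts).sample(frac=1.0, random_state=seed)  # shuffle the kept rows
```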
Finally, the data must be normalized to facilitate model training. We employ two normalization schemes. Scheme 1, Area-based normalization, traverses all pixels at all times in the entire dataset of the variable x(time, lat, lon) and finds the global maximum value x_Max_area and minimum value x_Min_area. For the pixel at longitude i and latitude j, the normalization formula is as follows:
x'_{ij} = \frac{x_{ij} - x_{Min\_area}}{x_{Max\_area} - x_{Min\_area}}    (1)
Scheme 2, Pixel-based normalization, traverses all times at each pixel of the variable x(time, lat, lon) to find that pixel's maximum value x_Max_ij and minimum value x_Min_ij, so different pixels have different extrema. For the pixel at longitude i and latitude j, the normalization formula is as follows:
x'_{ij} = \frac{x_{ij} - x_{Min\_ij}}{x_{Max\_ij} - x_{Min\_ij}}    (2)
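The two schemes can be written compactly in NumPy; the sketch below assumes each predictor is stored as an array of shape (time, lat, lon) and is meant only to make Equations (1) and (2) concrete.

```python
import numpy as np

def normalize_area_based(x):
    """Scheme 1 (Equation (1)): one global min/max over all times and all pixels."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

def normalize_pixel_based(x, eps=1e-8):
    """Scheme 2 (Equation (2)): per-pixel min/max taken along the time axis.
    eps guards against constant pixels; it is an added safeguard, not part of the paper."""
    x_min = x.min(axis=0, keepdims=True)   # shape (1, lat, lon)
    x_max = x.max(axis=0, keepdims=True)
    return (x - x_min) / (x_max - x_min + eps)
```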

2.3. Methods

We build a pixel-to-pixel attention mechanism model PRSOT for quantitative precipitation retrieval. The model’s general framework is depicted in Figure 4, mainly including the Input Layer, Encoding Layer, and Output Layer.
In the Input Layer, we take the twelve variable values (TBBs and angles) at a pixel, arranged as a one-dimensional sequence, as input. Similar to the class token of ViT [49] and BERT [42], we add a precipitation token P_token to the input variable sequence and initialize it with 0.1. The state of the P_token at the output of the Encoding Layer serves as the rainfall representation.
The Encoding Layer mainly includes a Linear Mapping Layer and N Attention Blocks. The Linear Mapping Layer maps the input to a high-dimensional space. The Attention Blocks then encode it, learn the relationship between the P_token and the other variables, and focus on crucial information. Each Attention Block consists of Multi-Head Attention (MHA) and Feed-Forward Networks (FFN). Layer normalization (LN) and residual connections are applied after the MHA and FFN to mitigate overfitting and the vanishing and exploding gradient problems. The main structure of the Encoding Layer is the same as the encoder of the Transformer [41].
In the Output Layer, we take only the part of the Encoding Layer output corresponding to the P_token, which contains the rainfall information extracted from the input variables, and feed it into the Fully Connected Layer (FC) for the final precipitation estimation. It is important to note that the model outputs quantitative rainfall (mm/h), not precipitation grades.
More details about the Encoding Layer are described below. After the Linear Mapping Layer, the input is mapped to D1 dimensions and added to the position embedding (Equation (3)). The Multi-Head Attention consists of M Scaled Dot-Product Attention heads. Query (Q), Key (K), and Value (V) are obtained by Equations (4)–(6), respectively. In meteorological terms, Q stands for variable properties, such as thermal, dynamic, and water vapor conditions; K represents the variable index, since each property contains multiple variables; and V represents the amount of information, that is, the contribution to the precipitation. A weighted summation is then carried out by Equation (7), where the weights are called attention scores. Different heads attend to different information, so the M heads are concatenated (Equation (8)). Finally, the output is obtained through the Feed-Forward Networks (Equation (9)).
Y_0 = E_x + E_{pos}, \quad E_x \in \mathbb{R}^{num\_var \times D_1}, \; E_{pos} \in \mathbb{R}^{num\_var \times D_1}    (3)
Q = Y_0 W^Q, \quad W^Q \in \mathbb{R}^{D_1 \times D_2}    (4)
K = Y_0 W^K, \quad W^K \in \mathbb{R}^{D_1 \times D_2}    (5)
V = Y_0 W^V, \quad W^V \in \mathbb{R}^{D_1 \times D_2}    (6)
Attention(Q, K, V) = softmax\left(\frac{Q K^T}{\sqrt{D_2}}\right) V    (7)
MultiHead = Concat(head_1, \dots, head_M) W^O, \quad W^O \in \mathbb{R}^{(D_2 \times M) \times D_1}    (8)
FFN(x) = \max(0, x W_{F1} + b_1) W_{F2} + b_2, \quad W_{F1} \in \mathbb{R}^{D_1 \times D_3}, \; W_{F2} \in \mathbb{R}^{D_3 \times D_1}    (9)
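The architecture described above can be sketched in PyTorch roughly as follows. The layer sizes follow Section 2.4 (N = 4, M = 8, D1 = 64, D3 = 512) and the P_token initialization of 0.1 follows Section 2.3; everything else (per-scalar embedding, learnable position embedding covering the P_token, post-norm encoder layers) is an assumption rather than the authors' released code.

```python
import torch
import torch.nn as nn

class PRSOT(nn.Module):
    """Pixel-wise Transformer: 12 scalar predictors (IR TBBs, SAZ, SOZ) -> rain rate (mm/h)."""
    def __init__(self, num_var=12, d1=64, heads=8, d3=512, blocks=4):
        super().__init__()
        self.embed = nn.Linear(1, d1)                              # Linear Mapping Layer: scalar -> D1
        self.p_token = nn.Parameter(torch.full((1, 1, d1), 0.1))   # precipitation token, init 0.1
        self.pos = nn.Parameter(torch.zeros(1, num_var + 1, d1))   # position embedding (Equation (3))
        layer = nn.TransformerEncoderLayer(d_model=d1, nhead=heads,
                                           dim_feedforward=d3, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=blocks)  # N Attention Blocks
        self.head = nn.Linear(d1, 1)                               # FC layer -> rain rate

    def forward(self, x):                                          # x: (batch, 12), normalized
        tokens = self.embed(x.unsqueeze(-1))                       # (batch, 12, D1)
        p = self.p_token.expand(x.size(0), -1, -1)                 # (batch, 1, D1)
        y = self.encoder(torch.cat([p, tokens], dim=1) + self.pos) # prepend P_token, add positions
        return self.head(y[:, 0]).squeeze(-1)                      # read out at the P_token position
```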

2.4. Implementation Details

For the parameters N, M, D1, D2, and D3, we set them to 4, 8, 64, 8, and 512, respectively. We used Mean Squared Error (MSE) as the loss function and conducted comparative experiments according to the two normalization schemes described in Section 2.2, denoted the PRSOT_Area_based model and the PRSOT_Pixel_based model, respectively. In the training phase, the samples used by the two models are identical in time, longitude, and latitude; only the normalization method differs. We also use RF regression, a widely used machine learning method, as a benchmark model. The RF regression model uses the same training and test sets as the PRSOT_Area_based model. We used 2000 samples from the training set to tune the number of decision trees and finally adopted 50 decision trees for the RF regression model, given the limitations of the hardware.
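A minimal training sketch is shown below. Only the 50-tree Random Forests benchmark and the MSE loss come from the paper; the optimizer, learning rate, epoch count, and the random stand-in arrays are assumptions introduced purely to make the snippet self-contained.

```python
import numpy as np
import torch
from sklearn.ensemble import RandomForestRegressor

# Random stand-ins for the matched, normalized training samples (shapes only, not real data).
X_train = np.random.rand(1000, 12).astype("float32")            # 12 predictors per pixel
y_train = np.random.gamma(1.0, 2.0, 1000).astype("float32")     # rain rate in mm/h

# Benchmark: Random Forests regression with 50 decision trees.
rf = RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0).fit(X_train, y_train)

# PRSOT training with an MSE loss; Adam and lr=1e-3 are assumptions, not stated in the paper.
model = PRSOT()                                                  # class from the sketch in Section 2.3
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()
xb, yb = torch.from_numpy(X_train), torch.from_numpy(y_train)
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()
    optimizer.step()
```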
We evaluate the models’ performance for precipitation retrieval according to two-class classification and multi-class classification evaluation metrics. We take 0.1 mm/h as the bound for two-class classification to classify the Model Estimates and Ground Truth (GPM IMERG Final Precipitation L3 product) into No Rain and Rain. We introduced the Probability Of Detection (POD), False Alarm Ratio (FAR), and Critical Success Index (CSI). The calculation formulas are as follows:
POD = \frac{A}{A + C}    (10)
FAR = \frac{B}{A + B}    (11)
CSI = \frac{A}{A + B + C}    (12)
Here, A indicates that both the Ground Truth and the Model Estimate are Rain, B indicates that the Ground Truth is No Rain but the Model Estimate is Rain (false alarm), C indicates that the Ground Truth is Rain but the Model Estimate is No Rain (miss), and D indicates that both the Ground Truth and the Model Estimate are No Rain. POD represents the probability that the model estimates accurately among the samples whose Ground Truth is Rain; if the model estimates all such samples correctly, the POD is 1. FAR represents the probability that the model estimates incorrectly among the samples whose Model Estimate is Rain; the higher the FAR, the more errors in the model estimates and the worse the performance. CSI represents the probability that the model estimates accurately among the samples where either the Model Estimate or the Ground Truth is Rain. If POD is high and FAR is low, CSI is high and model performance is good; if POD is high but FAR is also high, then CSI is low and performance is worse. CSI is therefore more comprehensive than POD or FAR alone. For multi-class classification, we introduce the Accuracy (ACC) metric and the Confusion Matrix. The ACC calculation formula is as follows:
ACC = \frac{1}{N} \sum_{i=1}^{N} I(PR_{ME} = PR_{GT})    (13)
where PR_ME is the Model Estimate, PR_GT is the Ground Truth, N is the number of samples in the whole test set, and I is the indicator function. We divide the Model Estimates and Ground Truth into four categories according to Table 1. When the class of the Model Estimate is consistent with that of the Ground Truth, I equals 1; otherwise, it equals 0. ACC represents the proportion of correct estimations among the total number of samples.
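The evaluation metrics can be computed from the estimated and observed rain rates as sketched below in NumPy; the handling of the 1.5–1.6 mm/h boundary in the grading function is an assumption, since Table 1 leaves that interval open.

```python
import numpy as np

def two_class_scores(est, obs, thr=0.1):
    """POD, FAR, CSI with A = hits, B = false alarms, C = misses at the 0.1 mm/h rain threshold."""
    rain_est, rain_obs = est >= thr, obs >= thr
    A = np.sum(rain_est & rain_obs)            # both Rain
    B = np.sum(rain_est & ~rain_obs)           # estimated Rain, observed No Rain (false alarm)
    C = np.sum(~rain_est & rain_obs)           # estimated No Rain, observed Rain (miss)
    return A / (A + C), B / (A + B), A / (A + B + C)

def grade(rate):
    """Map rain rates (mm/h) to the four grades of Table 1: 0 = No Rain ... 3 = Heavy Rain."""
    return np.digitize(rate, [0.1, 1.6, 7.0])

def accuracy(est, obs):
    """Multi-class ACC: fraction of samples whose estimated grade equals the ground-truth grade."""
    return np.mean(grade(est) == grade(obs))
```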

3. Results

3.1. Evaluation Metrics

The loss curves during training are shown in Figure 5. Overall, the curves show a downward trend; in the later stage, the decline slows markedly and almost stagnates, indicating that the models have converged. We then test the trained models on the August 2018 data.
The calculated metrics are shown in Table 2. The POD of the RF regression model is as high as 0.97, while the PRSOT models achieve better results on the other metrics. If a model estimates all samples (both No Rain and Rain) as Rain, its POD is 100%; thus, POD is one-sided, focusing on Rain samples and ignoring No Rain samples. Therefore, although the POD of the RF regression model is high, its FAR is also high, meaning that many No Rain samples are incorrectly estimated as Rain, which leads to low CSI and ACC. Compared with the RF regression model, the FAR of the PRSOT_Area_based model is reduced by 0.15, the CSI is increased by 0.13, and the ACC is increased by 0.25. The PRSOT_Pixel_based model obtains the highest CSI and ACC of 0.47 and 0.75, respectively. The PRSOT model can therefore effectively identify precipitation areas and is superior to the RF regression model, the machine learning benchmark.
To further analyze the ability of the PRSOT model for different precipitation grades, we calculate the Confusion Matrix, shown in Figure 6. The PRSOT_Area_based model estimates 20% of the No Rain samples as Rain, a significant overestimation for the No Rain class. The PRSOT_Pixel_based model accurately estimates 87% of the No Rain samples, a substantial improvement over the PRSOT_Area_based model. For Light Rain and Moderate Rain, both models still underestimate and overestimate, with the PRSOT_Area_based model performing better. For Heavy Rain, both models underestimate, but the PRSOT_Pixel_based model is relatively better. In general, the PRSOT_Pixel_based model better distinguishes No Rain from Rain and gives a more accurate estimation of heavy rain, which can cause greater disasters.
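For reference, a row-normalized confusion matrix like the one in Figure 6 can be obtained as follows, assuming each row corresponds to a ground-truth grade and each column to an estimated grade (this convention is inferred from the percentages quoted above, not stated explicitly in the paper).

```python
import numpy as np

def confusion_matrix_rows(obs_grade, est_grade, n_classes=4):
    """Row-normalized confusion matrix: entry [i, j] is the fraction of samples with
    ground-truth grade i that the model estimated as grade j."""
    cm = np.zeros((n_classes, n_classes))
    for o, e in zip(obs_grade, est_grade):
        cm[o, e] += 1
    return cm / cm.sum(axis=1, keepdims=True)
```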

3.2. Visual Comparison

We visually compare the Model Estimates and the Ground Truth. Figure 7 shows examples at 0500 UTC 01, 1530 UTC 09, 0930 UTC 21, and 0430 UTC 25 August 2018. The Model Estimates of the RF regression model have an overly large rainfall area, corresponding to the previous results on the evaluation metrics (POD, FAR, CSI, and ACC). The PRSOT_Area_based model alleviates this problem and is more consistent with the GPM precipitation. The PRSOT_Pixel_based model can further accurately identify the precipitation area, as shown in the red boxes in Figure 7c1–c4, and its prediction of heavy rain is improved, as shown in the red boxes in Figure 7a1–a4. However, the PRSOT_Pixel_based model still exhibits subtle differences in precipitation regions and inaccurate estimation of grades: it tends to overestimate some Moderate Rain samples as Heavy Rain, consistent with the Confusion Matrix results, and this feature is more pronounced over the sea. The distribution trends of the Model Estimates and GPM precipitation are consistent, but the estimated grades differ in some regions. The PRSOT_Pixel_based model is able to retrieve precipitation over sea areas, which can further supplement ground observations.

4. Discussion

We further compare the differences between the two normalization methods. After Area-based normalization, different pixels have different ranges of values. Figure 8 shows the maximum and minimum value distributions of the TBB from band 8 after Area-based normalization. The range of TBB is 0.3–1.0 in the northerly region but 0.0–0.8 in the southerly region. For the same variable, the same value therefore represents different physical meanings at different pixels. By analogy, when the relative humidity reaches 70%, the water vapor conditions are favorable and precipitation is likely in the north, whereas the same value makes precipitation harder to form in the south. Thus, obvious geographical differences remain in the data after Area-based normalization, which makes precipitation retrieval more difficult. We also perform a visual comparative analysis of Area-based and Pixel-based normalization. In the satellite observations, bands 8 and 9 represent the water vapor information in the middle layer, and band 16 represents the cloud-top height (CTH) information. By comparison, the precipitation area is more vivid and prominent after Pixel-based normalization, which further strengthens the local characteristics of water vapor and clouds (Figure 9). Thus, Pixel-based normalization helps the model make better estimates.
The proposed PRSOT model has certain advantages. We apply the Transformer model to infrared precipitation retrieval for the first time. It can generate precipitation distributions in near real-time based on satellite observations and solves the time-lag problem of GPM. Compared with the RF regression model, our proposed PRSOT_Pixel_based model improves the CSI and ACC metrics by 0.17 and 0.32, respectively. At the same time, the estimated results are consistent with the GPM distribution, which gives them good reference value in practical applications. We further evaluate the estimation ability of the model both spatially and temporally. Spatially, we calculated the correlation coefficient between the Model Estimates and the Ground Truth; the spatial distributions are shown in Figure 10. Most regional correlation coefficients exceed 0.5, indicating that the Model Estimates are consistent with the Ground Truth, and in the five regions A, B, C, D, and E in Figure 10 the correlation coefficient reaches 0.7 for both models. Temporally, we computed the diurnal variation of ACC (Figure 11). Compared with the PRSOT_Area_based model, the ACC of the PRSOT_Pixel_based model is greatly improved. In the two periods of 00:00–04:00 and 14:00–23:30, the ACC of the PRSOT_Pixel_based model is higher than the average of 0.75 and the Model Estimates are better; from 04:30 to 13:30, the ACC is below the mean and the Model Estimates are biased. In regions where the correlation coefficient exceeds 0.7 and at times when the ACC exceeds its average, the Model Estimates are the most reliable.
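The spatial and temporal diagnostics described here amount to a per-pixel correlation over time and an ACC grouped by time of day; a small NumPy sketch is given below, with array shapes and names chosen for illustration only.

```python
import numpy as np

def pixelwise_correlation(est, obs):
    """Correlation coefficient between estimates and ground truth at each pixel,
    computed along the time axis; est and obs have shape (time, lat, lon)."""
    ea, oa = est - est.mean(axis=0), obs - obs.mean(axis=0)
    den = np.sqrt((ea ** 2).sum(axis=0) * (oa ** 2).sum(axis=0))
    return (ea * oa).sum(axis=0) / np.where(den == 0, np.nan, den)

def diurnal_acc(est_grade, obs_grade, slot):
    """ACC per half-hourly time-of-day slot (0-47), as in Figure 11; 'slot' gives each
    sample's slot index."""
    return {t: float(np.mean(est_grade[slot == t] == obs_grade[slot == t]))
            for t in np.unique(slot)}
```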
However, our model still has shortcomings. On the one hand, it is not enough for the model’s input to only consider the information of clouds and water vapor. The generation of precipitation often requires dynamic conditions. For example, the coupling of low-level convergence and high-level divergence is conducive to the transportation, lifting, and condensation of water vapor. Therefore, it is necessary to introduce dynamic features such as wind and height fields into the model to optimize the model’s input further. On the other hand, our model is pixel-to-pixel, ignoring the influence of the surrounding environment on the center grid. CNN is more suitable for extracting spatial features than the attention mechanism. Therefore, combining CNN with the attention mechanism is expected to improve the estimation accuracy of the model further.

5. Conclusions

We established a pixel-to-pixel attention mechanism model—PRSOT—for quantitative precipitation retrieval using the Himawari-8 satellite observation data in the summer of 2018. We take the GPM IMERG Final Precipitation L3 product as the ground truth during training. After statistical analysis, we found that the original data have noticeable imbalance problems and marked diurnal characteristics. Therefore, down-sampling is applied to the original data to address the imbalance and preserve the diurnal characteristics on the training set (June and July). We designed two normalization schemes for comparative experiments, PRSOT_Area_based and PRSOT_Pixel_based models.
We divided the Model Estimates and Ground Truth into multiple grades according to meteorological criteria. We then introduced various evaluation metrics, including two-class (POD, FAR, and CSI) and multi-class (ACC) metrics, to evaluate the models' retrieval performance on the test set (August). The RF regression model is more inclined to give estimates of "Rain"; the enormous rainy area in the visualizations corresponds to high POD, high FAR, low CSI, and low ACC in the metrics. The proposed PRSOT model is more accurate in estimating "No Rain": visually, the rainy area is significantly reduced and more consistent with GPM. The PRSOT_Pixel_based model can further accurately identify the precipitation area and Heavy Rain. Moreover, compared with RF, the PRSOT_Pixel_based model improves the CSI and ACC metrics by 0.17 and 0.32, respectively. Pixel-based normalization accounts for the geographical differences between samples, so the precipitation characteristics become more prominent.
Overall, our proposed pixel-to-pixel algorithm can effectively identify precipitation areas and avoids the time-lag problem of multi-sensor joint retrieval products. In the future, we plan to introduce a CNN, which can effectively capture spatial features, to optimize the deep learning algorithm, and to add more precipitation-related dynamic factors to improve retrieval accuracy. In addition, it is essential to train the ML retrieval model on multi-year data, introducing inter-annual features, to achieve better generalization performance.

Author Contributions

Conceptualization, S.Y. and J.Z.; methodology, Z.J.; software, Z.J.; validation, S.Y., J.Z. and C.B.; formal analysis, Z.J.; investigation, Z.J.; resources, J.Z.; data curation, Y.Z.; writing—original draft preparation, Z.J.; writing—review and editing, J.Z., K.X. and Z.Y.; visualization, Z.J.; supervision, S.Y. and J.Z.; project administration, J.Z.; funding acquisition, J.Z. All authors read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Taishan Scholars Program, the National Key Research and Development Program of China under Grant 2018YFE0126100 and the Key Research and Development Program of Jiangsu Province under Grant BE2021093.

Data Availability Statement

The Himawari-8 satellite observation data can be accessed from http://www.eorc.jaxa.jp/ptree/index.html (accessed on 1 March 2022). GPM IMERG Final Precipitation L3 product can be accessed from https://disc.gsfc.nasa.gov/datasets/GPM_3IMERGHH_06/summary (accessed on 1 March 2022).

Acknowledgments

We gratefully thank JMA for freely offering the Himawari-8 satellite data and NASA for providing the IMERG V06B Final-Run products.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACC: Accuracy
ANN: Artificial Neural Networks
BERT: Bidirectional Encoder Representation from Transformers
CNN: Convolutional Neural Networks
CSI: Critical Success Index
CST: Convective-Stratiform Technology
CTH: Cloud Top Height
CV: Computer Vision
FAR: False Alarm Ratio
FC: Fully Connected Layer
FFN: Feed-Forward Networks
GBT: Gradient Boosting Trees
GMSRA: GOES MultiSpectral Rainfall Algorithm
GOES: Geostationary Operational Environmental Satellite
GPI: GOES Precipitation Index
GPM: Global Precipitation Measurement
GPT: Generative Pre-Training
IMERG: Integrated Multi-satellite Retrievals for GPM
IMERG_E: IMERG Early Run Product
IMERG_L: IMERG Late Run Product
IMERG_F: IMERG Final Run Product
IPEC: IR Precipitation Estimation using a CNN
IR: Infrared
JMA: Japan Meteorological Agency
LN: Layernorm
MHA: Multi-Head Attention
ML: Machine Learning
MSE: Mean Squared Error
NASA: National Aeronautics and Space Administration
NLP: Natural Language Processing
PERSIANN: Precipitation Estimation from Remotely Sensed Information using ANN
PERSIANN-CCS: PERSIANN-Cloud Classification System
PERSIANN-MSA: PERSIANN-Multispectral Analysis
POD: Probability Of Detection
RF: Random Forests
SAZ: Satellite Zenith Angle
SOZ: Solar Zenith Angle
SVM: Support Vector Machines
TBB: Blackbody (Brightness) Temperature
ViT: Vision Transformer
VIS: Visible

References

1. Kummerow, C.; Barnes, W.; Kozu, T.; Shiue, J.; Simpson, J. The tropical rainfall measuring mission (TRMM) sensor package. J. Atmos. Ocean. Technol. 1998, 15, 809–817.
2. Seneviratne, S.I.; Zhang, X.; Adnan, M.; Badi, W.; Dereczynski, C.; Di Luca, A.; Ghosh, S.; Iskandar, I.; Kossin, J.; Lewis, S.; et al. Weather and Climate Extreme Events in a Changing Climate. In Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2021; pp. 1513–1766.
3. Wilhelmi, O.V.; Morss, R.E. Integrated analysis of societal vulnerability in an extreme precipitation event: A Fort Collins case study. Environ. Sci. Policy 2013, 26, 49–62.
4. Bevacqua, E.; Vousdoukas, M.I.; Zappa, G.; Hodges, K.; Shepherd, T.G.; Maraun, D.; Mentaschi, L.; Feyen, L. More meteorological events that drive compound coastal flooding are projected under climate change. Commun. Earth Environ. 2020, 1, 1–11.
5. Guerreiro, S.B.; Glenis, V.; Dawson, R.J.; Kilsby, C. Pluvial flooding in European cities—A continental approach to urban flood modelling. Water 2017, 9, 296.
6. Panegrossi, G.; Casella, D.; Dietrich, S.; Marra, A.C.; Sanò, P.; Mugnai, A.; Baldini, L.; Roberto, N.; Adirosi, E.; Cremonini, R.; et al. Use of the GPM constellation for monitoring heavy precipitation events over the Mediterranean region. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2733–2753.
7. Yu, Y.; Wang, J.; Cheng, F.; Deng, H.; Chen, S. Drought monitoring in Yunnan Province based on a TRMM precipitation product. Nat. Hazards 2020, 104, 2369–2387.
8. Hong, Y.; Tang, G.; Ma, Y.; Huang, Q.; Han, Z.; Zeng, Z.; Yang, Y.; Wang, C.; Guo, X. Remote sensing precipitation: Sensors, retrievals, validations, and applications. In Observation and Measurement of Ecohydrological Processes; Springer: Berlin/Heidelberg, Germany, 2019; pp. 107–128.
9. Gruber, A.; Levizzani, V. Assessment of Global Precipitation Products: A Project of the World Climate Research Programme Global Energy and Water Cycle Experiment (GEWEX) Radiation Panel. Available online: http://www.gewex.org/gewex-content/uploads/2016/07/2008AssessmentGlobalPrecipitationReport.pdf (accessed on 15 March 2022).
10. Villarini, G.; Krajewski, W.F. Empirically-based modeling of spatial sampling uncertainties associated with rainfall measurements by rain gauges. Adv. Water Resour. 2008, 31, 1015–1023.
11. Castro, L.M.; Gironás, J.; Fernández, B. Spatial estimation of daily precipitation in regions with complex relief and scarce data using terrain orientation. J. Hydrol. 2014, 517, 481–492.
12. Anagnostou, E.N.; Maggioni, V.; Nikolopoulos, E.I.; Meskele, T.; Hossain, F.; Papadopoulos, A. Benchmarking high-resolution global satellite rainfall products to radar and rain-gauge rainfall estimates. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1667–1683.
13. Stampoulis, D.; Anagnostou, E.N. Evaluation of global satellite rainfall products over continental Europe. J. Hydrometeorol. 2012, 13, 588–603.
14. Hou, A.Y.; Kakar, R.K.; Neeck, S.; Azarbarzin, A.A.; Kummerow, C.D.; Kojima, M.; Oki, R.; Nakamura, K.; Iguchi, T. The global precipitation measurement mission. Bull. Am. Meteorol. Soc. 2014, 95, 701–722.
15. Kummerow, C.; Olson, W.S.; Giglio, L. A simplified scheme for obtaining precipitation and vertical hydrometeor profiles from passive microwave sensors. IEEE Trans. Geosci. Remote Sens. 1996, 34, 1213–1232.
16. Zhao, L.; Weng, F. Retrieval of ice cloud parameters using the Advanced Microwave Sounding Unit. J. Appl. Meteorol. Climatol. 2002, 41, 384–395.
17. Thies, B.; Nauß, T.; Bendix, J. Discriminating raining from non-raining cloud areas at mid-latitudes using meteosat second generation SEVIRI night-time data. Meteorol. Appl. 2008, 15, 219–230.
18. Thies, B.; Nauß, T.; Bendix, J. Precipitation process and rainfall intensity differentiation using Meteosat second generation spinning enhanced visible and infrared imager data. J. Geophys. Res. Atmos. 2008, 113, D23206.
19. Arkin, P.A.; Meisner, B.N. The relationship between large-scale convective rainfall and cold cloud over the western hemisphere during 1982–1984. Mon. Weather Rev. 1987, 115, 51–74.
20. Vicente, G.A.; Scofield, R.A.; Menzel, W.P. The operational GOES infrared rainfall estimation technique. Bull. Am. Meteorol. Soc. 1998, 79, 1883–1898.
21. Scofield, R.A.; Kuligowski, R.J. Status and outlook of operational satellite precipitation algorithms for extreme-precipitation events. Weather Forecast. 2003, 18, 1037–1051.
22. Ba, M.B.; Gruber, A. GOES multispectral rainfall algorithm (GMSRA). J. Appl. Meteorol. 2001, 40, 1500–1514.
23. Wu, R.; Weinman, J.A.; Chin, R.T. Determination of rainfall rates from GOES satellite images by a pattern recognition technique. J. Atmos. Ocean. Technol. 1985, 2, 314–330.
24. Griffith, C.G.; Woodley, W.L.; Grube, P.G.; Martin, D.W.; Stout, J.; Sikdar, D.N. Rain estimation from geosynchronous satellite imagery—Visible and infrared studies. Mon. Weather Rev. 1978, 106, 1153–1171.
25. Adler, R.F.; Negri, A.J. A satellite infrared technique to estimate tropical convective and stratiform rainfall. J. Appl. Meteorol. Climatol. 1988, 27, 30–51.
26. Ebert, E.E.; Manton, M.J. Performance of satellite rainfall estimation algorithms during TOGA COARE. J. Atmos. Sci. 1998, 55, 1537–1557.
27. Hsu, K.L.; Gao, X.; Sorooshian, S.; Gupta, H.V. Precipitation estimation from remotely sensed information using artificial neural networks. J. Appl. Meteorol. Climatol. 1997, 36, 1176–1190.
28. Hong, Y.; Hsu, K.L.; Sorooshian, S.; Gao, X. Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system. J. Appl. Meteorol. 2004, 43, 1834–1853.
29. Behrangi, A.; Hsu, K.L.; Imam, B.; Sorooshian, S.; Huffman, G.J.; Kuligowski, R.J. PERSIANN-MSA: A precipitation estimation method from satellite-based multispectral analysis. J. Hydrometeorol. 2009, 10, 1414–1429.
30. Bellerby, T.; Todd, M.; Kniveton, D.; Kidd, C. Rainfall estimation from a combination of TRMM precipitation radar and GOES multispectral satellite imagery through the use of an artificial neural network. J. Appl. Meteorol. 2000, 39, 2115–2128.
31. Hamidi, O.; Poorolajal, J.; Sadeghifar, M.; Abbasi, H.; Maryanaji, Z.; Faridi, H.R.; Tapak, L. A comparative study of support vector machines and artificial neural networks for predicting precipitation in Iran. Theor. Appl. Climatol. 2015, 119, 723–731.
32. Meyer, H.; Kühnlein, M.; Appelhans, T.; Nauss, T. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals. Atmos. Res. 2016, 169, 424–433.
33. Sehad, M.; Lazri, M.; Ameur, S. Novel SVM-based technique to improve rainfall estimation over the Mediterranean region (north of Algeria) using the multispectral MSG SEVIRI imagery. Adv. Space Res. 2017, 59, 1381–1394.
34. Ma, L.; Zhang, G.; Lu, E. Using the gradient boosting decision tree to improve the delineation of hourly rain areas during the summer from advanced Himawari imager data. J. Hydrometeorol. 2018, 19, 761–776.
35. Kühnlein, M.; Appelhans, T.; Thies, B.; Nauß, T. Precipitation estimates from MSG SEVIRI daytime, nighttime, and twilight data with random forests. J. Appl. Meteorol. Climatol. 2014, 53, 2457–2480.
36. Das, S.; Chakraborty, R.; Maitra, A. A random forest algorithm for nowcasting of intense precipitation events. Adv. Space Res. 2017, 60, 1271–1282.
37. Min, M.; Bai, C.; Guo, J.; Sun, F.; Liu, C.; Wang, F.; Xu, H.; Tang, S.; Li, B.; Di, D.; et al. Estimating summertime precipitation from Himawari-8 and global forecast system based on machine learning. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2557–2570.
38. Turini, N.; Thies, B.; Bendix, J. Estimating high spatio-temporal resolution rainfall from MSG1 and GPM IMERG based on machine learning: Case study of Iran. Remote Sens. 2019, 11, 2307.
39. Kolbe, C.; Thies, B.; Egli, S.; Lehnert, L.; Schulz, H.M.; Bendix, J. Precipitation retrieval over the Tibetan Plateau from the geostationary orbit—Part 1: Precipitation area delineation with Elektro-L2 and Insat-3D. Remote Sens. 2019, 11, 2302.
40. Wang, C.; Xu, J.; Tang, G.; Yang, Y.; Hong, Y. Infrared precipitation estimation using convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8612–8625.
41. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
42. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
43. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. Available online: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf (accessed on 15 August 2022).
44. Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q.V.; Salakhutdinov, R. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv 2019, arXiv:1901.02860.
45. Jiao, X.; Yin, Y.; Shang, L.; Jiang, X.; Chen, X.; Li, L.; Wang, F.; Liu, Q. TinyBERT: Distilling BERT for natural language understanding. arXiv 2019, arXiv:1909.10351.
46. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the Computer Vision—ECCV 2020, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 213–229.
47. Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; Feng, J. DeepViT: Towards deeper vision transformer. arXiv 2021, arXiv:2103.11886.
48. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022.
49. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
50. Tan, M.L.; Samat, N.; Chan, N.W.; Roy, R. Hydro-meteorological assessment of three GPM satellite precipitation products in the Kelantan River Basin, Malaysia. Remote Sens. 2018, 10, 1011.
51. Su, J.; Lü, H.; Zhu, Y.; Cui, Y.; Wang, X. Evaluating the hydrological utility of latest IMERG products over the Upper Huaihe River Basin, China. Atmos. Res. 2019, 225, 17–29.
52. Nie, Y.; Sun, J. Evaluation of high-resolution precipitation products over southwest China. J. Hydrometeorol. 2020, 21, 2691–2712.
53. Zhang, Y.; Wu, K.; Zhang, J.; Zhang, F.; Xiao, H.; Wang, F.; Zhou, J.; Song, Y.; Peng, L. Estimating Rainfall with Multi-Resource Data over East Asia Based on Machine Learning. Remote Sens. 2021, 13, 3332.
Figure 1. The schematic diagram of the study area.
Figure 2. The proportion of all grades in the raw data.
Figure 3. The trend of the ratio of Rain samples over time. The Rain samples rise in fluctuation from 00:00 to 11:30 and decrease in fluctuation from 12:00 to 23:30.
Figure 4. The general framework of PRSOT. The Input Layer describes the variables that are specifically input to the model. The Encoding Layer is used to learn the nonlinear relationship between precipitation and input variables. The Output Layer further predicts precipitation (mm/h) according to the precipitation features extracted by the Encoding Layer. The proposed model is pixel-based, and all the pixels at the same time are assembled to form a full precipitation map, according to their geographic locations (latitude and longitude), after being given precipitation estimations.
Figure 5. The MSE loss variation curves of PRSOT_Area_based model and PRSOT_Pixel_based model in the training period.
Figure 6. The Confusion Matrix metrics for PRSOT_Area_based model (a) and PRSOT_Pixel_based model (b).
Figure 7. Comparisons between GPM, Random Forests, PRSOT_Area_based model, and PRSOT_Pixel_based model at 0500 UTC 01 (a1–a4), 1530 UTC 09 (b1–b4), 0930 UTC 21 (c1–c4) and 0430 UTC 25 (d1–d4) August 2018. The red boxes indicate regions with significant contrast.
Figure 8. The maximum (a) and minimum (b) value distributions of TBB from band 8 after Area-based normalization.
Figure 9. Comparisons between Area-based normalization (a–c) and Pixel-based normalization (d–f) at 0930 UTC 21 August 2018.
Figure 10. Spatial distribution of correlation coefficients for PRSOT_Area_based (a) and PRSOT_Pixel_based model (b) in testing dataset during August 2018. The letters A, B, C, D, and E represent the major subregions with correlation coefficients greater than 0.7.
Figure 11. The diurnal variation of ACC for the precipitation grades’ estimation in testing dataset during August 2018. The red dotted line represents the average ACC, 0.49 for PRSOT_Area_based model (a) and 0.69 for PRSOT_Pixel_based model (b). Box diagrams indicate the 25th, 50th, and 75th percentiles, whereas the periphery of the box extends to 1.5-times the quartile deviation (25th–75th percentile). Outliers are indicated by black circles.
Table 1. Classification of precipitation grades.
Class    Grades           Range
1        No Rain          <0.1 mm/h
2        Light Rain       0.1 mm/h–1.5 mm/h
3        Moderate Rain    1.6 mm/h–6.9 mm/h
4        Heavy Rain       ≥7.0 mm/h
Table 2. Evaluation metrics for PRSOT_Area_based model, PRSOT_Pixel_based model and Random Forests.
Metrics    PRSOT_Area_Based    PRSOT_Pixel_Based    Random Forests
POD        0.85                0.74                 0.97
CSI        0.43                0.47                 0.30
FAR        0.54                0.44                 0.69
ACC        0.68                0.75                 0.43
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
