Article

Improving Quantitative Rainfall Prediction Using Ensemble Analogues in the Tropics: Case Study of Uganda

by Isaac Mugume 1,*, Michel D. S. Mesquita 2,3,†, Yazidhi Bamutaze 1,†, Didier Ntwali 4,†, Charles Basalirwa 1,†, Daniel Waiswa 1,†, Joachim Reuder 5,†, Revocatus Twinomuhangi 1,†, Fredrick Tumwine 1,†, Triphonia Jakob Ngailo 6 and Bob Alex Ogwang 7

1 Department of Geography, Geoinformatics and Climatic Sciences, Makerere University, P. O. Box 7062, Kampala, Uganda
2 Future Solutions, Håvikbrekka 92, 5440 Mosterhamn, Norway
3 Uni Research Climate, Bjerknes Centre for Climate Research, Jahnebakken 5, 5007 Bergen, Norway
4 Institute of Atmospheric Physics, Laboratory for Middle Atmosphere and Global Environmental Observation, University of Chinese Academy of Sciences, Beijing 100029, China
5 Geophysical Institute, University of Bergen, Allegaten 70, 5007 Bergen, Norway
6 Department of General Studies, Dar es Salaam Institute of Technology, P. O. Box 2958, Dar es Salaam, Tanzania
7 Uganda National Meteorological Authority, P. O. Box 7025, Kampala, Uganda
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Atmosphere 2018, 9(9), 328; https://doi.org/10.3390/atmos9090328
Submission received: 25 December 2017 / Revised: 26 March 2018 / Accepted: 7 April 2018 / Published: 22 August 2018
(This article belongs to the Special Issue Precipitation Variability and Change in Africa)

Abstract:
Accurate and timely rainfall prediction enhances productivity and can aid planning in sectors such as agriculture, health, transport and water resources. However, quantitative rainfall prediction remains a challenge, and this study was therefore conducted with the aim of improving rainfall prediction using ensemble methods. It first assessed the performance of six convective schemes (Kain–Fritsch (KF); Betts–Miller–Janjić (BMJ); Grell–Freitas (GF); Grell 3D ensemble (G3); New Tiedtke (NT) and Grell–Devenyi (GD)) using the root mean square error (RMSE) and mean error (ME), focusing on the March–May 2013 rainfall period over Uganda. Eighteen ensemble members were then generated from the three best performing convective schemes (i.e., KF, GF and G3). The daily rainfall predicted by the three ensemble methods (i.e., ensemble mean (ENS); ensemble mean analogue (EMA) and multi–member analogue ensemble (MAEM)) was then compared with the observed daily rainfall, and the RMSE and ME were computed. The results show that the ENS presented a smaller RMSE than the individual schemes (ENS: 10.02; KF: 23.96; BMJ: 26.04; GF: 25.85; G3: 24.07; NT: 29.13 and GD: 26.27) and a better bias (ENS: −1.28; KF: −1.62; BMJ: −4.04; GF: −3.90; G3: −3.62; NT: −5.41 and GD: −4.07). The EMA and MAEM presented smaller RMSE than the ENS at 13 out of 21 stations and 17 out of 21 stations respectively, demonstrating an additional improvement in predictive performance. This study proposed and described the MAEM and found that it produced comparatively better quantitative rainfall prediction than the other ensemble methods used. The MAEM method should be valid regardless of the nature of the rainfall season.

1. Introduction

Rainfall is a key climatic element with consequences for key production sectors, including agriculture [1,2], health [3], electricity generation [4] and water resources [5,6], among others. Over Eastern Africa, the rainfall distribution and quantity are influenced by many factors, such as the Inter–Tropical Convergence Zone, El Niño/La Niña episodes, the Indian Ocean Dipole and extra-tropical weather systems [7,8]. The spatial and temporal variability of rainfall makes its quantitative prediction a challenge [8,9]. However, according to He et al. [5] and Jie et al. [10], rainfall can be predicted quantitatively up to 7 days ahead using models, although the prediction accuracy degrades with increasing lead time [11].
Several scientific approaches to quantitative rainfall prediction have been suggested, such as the use of radar [8] and the use of Numerical Weather Prediction (NWP) models [6]. Radar is considered superior for short–range forecasts because of its better spatial representation and assimilation of the initial rainfall estimates. Unfortunately, the accuracy of the radar–based method declines with increasing lead time because the growth and decay of rainfall are poorly resolved at long lead times [5]. NWP models usually have higher skill because they can represent the dynamics and physics influencing atmospheric processes [6]. However, NWP models are limited by their failure to resolve sub–grid processes, which is addressed using parameterization schemes [12]. An additional scientific method is the use of statistical models such as regression [9].
NWP models normally face a challenge in quantitative rainfall prediction due to, among others, uncertainty in initial conditions [6,12] and model deficiencies [11,13]. This challenge can result in orographic bias [13], manifesting as enhanced rainfall over mountainous regions. Therefore, more than one deterministic NWP model can be run to generate an ensemble and thus improve quantitative precipitation forecasts (QPF), which also helps to quantify uncertainty. He et al. [5] noted that ensemble QPF normally produces higher skill, in terms of both quantity and occurrence time, than the individual ensemble members.
According to the European Centre for Medium–Range Weather Forecasts (ECMWF [14]), which runs 51 ensemble members, an ensemble prediction consists of simultaneously running multiple forecasts (i.e., ‘ensemble members’) with varying initial conditions and slightly perturbed physics to represent the uncertainty in initial conditions and to produce a range of possible weather outcomes. It differs from the model output statistics introduced by Glahn and Lowry [15], which is a method of determining the statistical relationship between the predictand and the variables predicted by a numerical model [16]. The output from the ensemble members is then statistically post–processed to obtain a skillful probabilistic forecast [1,17], which addresses the uncertainty inherent in the initial conditions and the associated model imperfections faced by a deterministic NWP model [18,19]. This is because the ensemble spread gives a measure of the uncertainty of the prediction [10,20]. However, quantifying absolute uncertainty presents an additional challenge due to the bias inherent in the models used [20].
The ensemble mean normally has a smaller error than the mean error of individual deterministic NWP model forecasts [21,22] and is often more skillful than climatology forecasts [23]. For this reason, the ensemble mean is the most widely used tool and is normally used as a deterministic forecast [21]. However, studies such as He et al. [5], Coiffier [11] and Evans et al. [24], among many others, suggest that much of the result given by the ensemble mean could be obtained from a single deterministic forecast that is subsequently improved by statistical correction. The major limitation of this approach is that the statistical formulation may not accurately represent extreme weather. Zhu et al. [25] found that a wide array of ensemble members giving an adequate ensemble spread was skillful for short– and medium–range prediction (i.e., less than 7 days) and also assisted in dividing ensemble rainfall data into sub–samples.
Ensemble members can be obtained in many ways, such as running NWP models with varying initial conditions [22], perturbing the physical parameterization schemes of the model [14,26], initializing the models at different times (time–lagged ensemble) [10], or combining output from different NWP models (multi–model ensemble) [10,27]. However, it is important to integrate multiple methods of generating ensemble members because, for example, perturbing initial conditions or model physics alone has been found to be under-dispersive [22,25].
Although ensemble rainfall prediction has been extensively covered by previous studies, e.g., He et al. [5], Fritsch et al. [27], Redmond et al. [26] and many others, the majority of these studies employed the ensemble mean for quantitative rainfall prediction. A few studies, e.g., Hamill and Whitaker [28], have considered analogues in ensemble precipitation prediction and found this method to improve the Brier Skill Score, but they also recommended that this approach is appropriate for regional rather than global use. Additionally, Vanvyve et al. [29] used ensemble analogues in the form of the ensemble mean analogue (EMA) for wind speed forecasting. This study employs the EMA in addition to the ensemble mean and proposes another ensemble analogue method, the ‘multi–member analogue ensemble’. It assesses the performance of these ensemble methods on an 18–member ensemble derived from the 3 best performing cumulus schemes using the Weather Research and Forecasting (WRF) model. The ensemble members are generated by combining the convection schemes, varying the entrainment rate by ±25% and time–lagging.

2. Data and Methods

2.1. Data Sources

The study used daily rain gauge rainfall data from 1 March to 31 May 2013, recorded at 21 weather stations in Uganda (Figure 1). The rainfall data were obtained from the Uganda National Meteorological Authority (UNMA) and quality controlled by checking for completeness. As in many climatic records of developing countries, missing data were found at Entebbe, Gulu, Jinja, Serere and Tororo. The gaps at these stations were filled using the normal ratio method described by Mugume et al. [8], and the stations were not excluded from the analysis since the data gaps were less than 5%, a threshold recommended by the World Meteorological Organization.
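For illustration, a minimal Python sketch of the standard normal ratio formula is given below: a missing value is estimated from neighbouring stations, each scaled by the ratio of the target station's long-term (normal) rainfall to the neighbour's. The station values shown are hypothetical; the exact variant applied in the study is the one described by Mugume et al. [8].

```python
import numpy as np

def normal_ratio_fill(neighbour_rain, neighbour_normals, target_normal):
    """Estimate a missing daily rainfall value at a target station.

    Standard normal ratio method: each neighbouring observation is scaled by
    the ratio of the target station's normal rainfall to the neighbour's
    normal, and the scaled values are averaged.

    neighbour_rain    : rainfall observed at the neighbouring stations (mm)
    neighbour_normals : long-term mean rainfall at those stations (mm)
    target_normal     : long-term mean rainfall at the station with the gap (mm)
    """
    neighbour_rain = np.asarray(neighbour_rain, dtype=float)
    neighbour_normals = np.asarray(neighbour_normals, dtype=float)
    return float(np.mean(target_normal / neighbour_normals * neighbour_rain))

# Hypothetical example: one missing day filled from three neighbouring stations.
print(normal_ratio_fill([12.4, 8.9, 15.1], [1450.0, 1200.0, 1600.0], 1500.0))
```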
The daily rainfall data were then compared with the daily ensemble–generated rainfall data over the same period, and this comparison was used to examine the performance of the ensemble methods. The lateral boundary condition data used to initialize the deterministic NWP simulations were obtained from the National Centers for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) Reanalysis [30] at a resolution of 1° × 1°, covering the period of study.

2.2. The Study Area

The study was conducted in Uganda, which extends approximately from 29.55° E to 35.09° E and from 1.50° S to 4.36° N. The country largely experiences a tropical savanna climate, with isolated areas such as the Lake Victoria Basin experiencing an equatorial climate [31]. Additional climatological zoning has been carried out by Basalirwa [32], who delineated the country into homogeneous zones. The 21 stations used in the study are representative of the major political regions, e.g., the northern region (Arua, Gulu, Lira and Kitgum); the eastern region (Serere, Soroti, Buginyanya and Tororo); the western region (Masindi, Kasese, Bushenyi, Mbarara and Kabale); the cattle corridor (a diagonal stretch from south–western to north–eastern Uganda, represented by Ntusi); and the Lake Victoria Basin (Entebbe, Makerere, Kituza, Namulonge, Jinja, Kamenyamigo and Kibanda).
The rainfall over Uganda exhibits large spatial and temporal variations, with the MAM rainfall period starting in March in most areas except the northern region, where the rains normally start in April [2]. With the exception of the northern region, which experiences a unimodal rainfall distribution, Uganda generally experiences two rainfall seasons (i.e., March–May and September–November). The March–May (MAM) seasonal rainfall over Uganda is generally influenced by the Inter–Tropical Convergence Zone [2,8]; the monsoon winds of East Africa [33,34]; the Indian Ocean Dipole [2,7]; the humid Congo airmass [6]; tropical cyclones, semi–permanent subtropical anticyclones and easterly waves [7,8]; the complex topography [34]; and vegetation and inland water bodies, which modulate local rainfall [8,35]. The Inter–Tropical Convergence Zone migrates north and south across the equator twice a year, which brings the two rainfall seasons in the region, i.e., the March–May and September–November seasons.

2.3. Experimental Design

The ensemble members were generated by (i) initializing the NWP simulations at different times (i.e., time–lagged ensemble members); (ii) varying the cumulus parameterization schemes; (iii) perturbing the cumulus parameterization schemes; and (iv) combinations of (i) to (iii). We first assessed the performance of six cumulus parameterization schemes (i.e., Kain–Fritsch (KF); Betts–Miller–Janjić (BMJ); Grell–Freitas (GF); Grell 3D ensemble (G3); New Tiedtke (NT) and Grell–Devenyi (GD)) using the root mean square error and the mean error (Section 2.4.1), comparing the daily simulated rainfall with the daily observed rainfall. The three poorest performers were eliminated, in line with Evans et al. [24], and the analysis was carried out with the three best schemes (i.e., KF, GF and G3), from which we generated the ensemble members. The description of the cumulus parameterization schemes and their strong influence on convective precipitation simulation is presented by Mayor and Mesquita [12].
The time–lagged ensemble members were derived by initializing the WRF model simulations at 0000 UTC, 0600 UTC, 1200 UTC and 1800 UTC. This type of 6–hour time lag was also used by Jie et al. [10] for a 6–15 day ensemble probabilistic summer precipitation forecast over China. Perturbing the schemes was carried out by varying the entrainment rates (i.e., ±25% of the default rate). The study further combined different physics parameterization schemes (multi–physics ensemble), but mainly varied the cumulus parameterization schemes because of their significant effect on precipitation simulation [11,12], especially convective precipitation in the tropics, where the study region, Uganda, is located. All simulations were done using the same initial conditions provided by NCEP/NCAR (Section 2.1), and a total of 18 ensemble members was used.
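As an illustration of how such member configurations can be enumerated, the Python sketch below combines the building blocks described above (cumulus scheme, entrainment perturbation and initialization time). The exact subset of combinations that made up the 18 members is not spelled out here, so the enumeration is illustrative rather than a record of the actual member list.

```python
from itertools import product

# Building blocks described in the text.
schemes = ["KF", "GF", "G3"]                    # the three best cumulus schemes
entrainment = [-0.25, 0.0, +0.25]               # fractional perturbation of the default entrainment rate
init_times = ["0000", "0600", "1200", "1800"]   # UTC initialization times (time lag)

# Enumerate every scheme/perturbation/initialization combination; the study
# used a subset of such combinations totalling 18 ensemble members.
members = [
    {"cu_scheme": s, "entrainment_perturbation": e, "init_utc": t}
    for s, e, t in product(schemes, entrainment, init_times)
]
print(len(members), "candidate configurations, e.g.", members[0])
```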
Additionally, the study used three domains, as shown in Figure 2, designed such that:
  • the first domain, at a horizontal resolution of 90 km, covered Africa and was deemed sufficiently large to cover large-scale synoptic systems, such as the sub–tropical high pressure systems, which are important for rainfall over the equatorial region;
  • the second domain, at a horizontal resolution of 30 km, covered most parts of the equatorial region to cater for the influx of moisture over Uganda, especially the Congo air mass and the moist currents from the Mozambique Channel;
  • the third domain, at a horizontal resolution of 10 km, covered Uganda, the study region. This domain was considered appropriate to resolve local physical features such as orography and the inland water bodies.
The simulations were carried out for the period 1 March–31 May 2013, run daily with a 12 h spin–up period to reduce spin–up errors. All simulations used the WRF model version 3.8, employing the staggered Arakawa C–grid; 30 vertical layers with the model top fixed at 50 hPa; the terrain–following mass coordinate as the vertical coordinate, which allows variation of the vertical grid spacing; and the Runge–Kutta 2nd order integration option. The other physical schemes used were: the WRF Single–Moment 3–class microphysical scheme; the Rapid Radiative Transfer Model as the longwave radiation scheme; the Dudhia scheme as the shortwave radiation scheme; the Noah Land Surface Model; and the Yonsei University scheme as the planetary boundary layer scheme.
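For reference, the setup described above can be summarized as in the sketch below. This is a hedged summary expressed as Python data, not a verbatim WRF namelist: the numeric physics option codes follow the standard WRF option numbering and should be checked against the WRF version in use.

```python
# Physics options described in the text, keyed by the usual WRF namelist names.
wrf_physics = {
    "mp_physics": 3,          # WRF Single-Moment 3-class microphysics
    "ra_lw_physics": 1,       # RRTM longwave radiation
    "ra_sw_physics": 1,       # Dudhia shortwave radiation
    "sf_surface_physics": 2,  # Noah Land Surface Model
    "bl_pbl_physics": 1,      # Yonsei University PBL scheme
}
# Cumulus scheme option codes (standard WRF numbering; verify for the version used).
cu_physics_options = {"KF": 1, "BMJ": 2, "GF": 3, "G3": 5, "NT": 16, "GD": 93}

# Nested domains and vertical setup described above.
domains = {
    "d01": {"dx_km": 90, "coverage": "Africa"},
    "d02": {"dx_km": 30, "coverage": "Equatorial Africa"},
    "d03": {"dx_km": 10, "coverage": "Uganda (study region)"},
}
n_vertical_layers = 30
model_top_hpa = 50
```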

2.4. Methods

2.4.1. Performance Analysis Methods

This study employed the root mean square error, RMSE (Equation (1)), and the mean error, ME (Equation (2)), to assess the performance of the ensemble methods. The study also used Student's t–test to compare the performance of the ensemble methods with that of the convective parameterization schemes.
The RMSE is the square root of the arithmetic average of the squared daily differences between the paired predicted ($P_i$) and observed ($O_i$) values. It is calculated using Equation (1):
RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(P_i - O_i\right)^2}    (1)
The ME is the arithmetic average of the paired differences ($P_i - O_i$) and is obtained using Equation (2):
ME = \frac{1}{n} \sum_{i=1}^{n} \left(P_i - O_i\right)    (2)
where $i$ denotes the $i$th data point ordered in time, which in this study is on a daily basis (i.e., every 24 h).
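A minimal implementation of Equations (1) and (2) is sketched below; the daily values shown are hypothetical.

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error, Equation (1): square root of the mean squared paired difference."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

def mean_error(predicted, observed):
    """Mean error (bias), Equation (2): mean of the paired differences P_i - O_i."""
    predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
    return float(np.mean(predicted - observed))

# Hypothetical daily rainfall values (mm) for one station.
pred = [5.2, 0.0, 12.3, 3.1]
obs = [4.0, 1.5, 10.0, 2.0]
print(rmse(pred, obs), mean_error(pred, obs))
```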
Student's t–test is a parametric test used to assess the difference between the means of two univariate random variables [36]. The null hypothesis adopted in the study was that there is "no difference between the RMSE of the ensemble method and that of the convective parameterization scheme" at the 99% confidence level. If the RMSE (or ME) sample means of the ensemble method and of the convective parameterization scheme are $\bar{x}$ and $\bar{y}$ respectively, the Student's t statistic is defined by Equation (3):
t = \frac{\bar{x} - \bar{y}}{s_{12} \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}    (3)
where $n_1$ and $n_2$ are the sample sizes of the RMSE (or ME) of the ensemble members and of the convective parameterization schemes respectively, and $s_{12}$ is the pooled standard deviation, computed using Equation (4):
s_{12}^2 = \frac{\sum_{i=1}^{n_1} \left(x_i - \bar{x}\right)^2 + \sum_{i=1}^{n_2} \left(y_i - \bar{y}\right)^2}{n_1 + n_2 - 2}    (4)
where $x$ and $y$ are the variables (i.e., the RMSE and ME) of the ensemble method and of the convective parameterization schemes respectively.
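The t statistic of Equations (3) and (4) can be computed as in the sketch below; an equivalent result, including the p–value, is available from scipy.stats.ttest_ind with equal_var=True.

```python
import numpy as np

def pooled_t_statistic(x, y):
    """Two-sample Student's t statistic with pooled standard deviation (Equations (3)-(4))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = x.size, y.size
    # Pooled variance: squared deviations of both samples divided by the combined degrees of freedom.
    pooled_var = (np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)) / (n1 + n2 - 2)
    return float((x.mean() - y.mean()) / np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2)))

# Hypothetical station RMSE values for an ensemble method (x) and a single scheme (y).
print(pooled_t_statistic([10.9, 15.2, 9.6, 11.5], [12.2, 19.7, 14.7, 45.4]))
```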

2.4.2. The Ensemble Mean

The ensemble mean is the arithmetic average of the ensemble members. It was found to outperform climatological forecasts by Segele et al. [23] and is the most widely used ensemble method [21]. The description of the ensemble mean as used in this study follows.
If we have n ensemble members such that,
\phi_1, \phi_2, \ldots, \phi_{n-1}, \phi_n
and for each member $i$ we obtain the prediction distribution over a total number of $m$ events,
a_{1i}, a_{2i}, \ldots, a_{(m-1)i}, a_{mi}
which we summarize as
\begin{pmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{pmatrix}
where the rows represent the different ensemble members and the columns are the prediction events. The ensemble mean $\bar{a}_k$ is the arithmetic average of the individual columns; in this study, $\bar{a}_k$ was compared with the observed rainfall event $O_k$, and the RMSE (Equation (1)) as well as the ME (Equation (2)) were computed.
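A short sketch of this column–wise averaging, with hypothetical member forecasts, is given below.

```python
import numpy as np

# Rows are ensemble members, columns are daily prediction events, matching the
# matrix notation above. Values are hypothetical daily rainfall amounts (mm).
ensemble = np.array([
    [4.1, 0.0, 11.8],   # member 1
    [6.0, 0.3, 14.2],   # member 2
    [5.2, 0.0,  9.7],   # member 3
])
ens_mean = ensemble.mean(axis=0)   # column-wise mean: one value per event (a_bar_k)
print(ens_mean)
```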

2.4.3. The Ensemble Mean Analogue

The Ensemble Mean Analogue (EMA) method was proposed by Delle Monache et al. [37] and later used by Vanvyve et al. [29] and Horvath et al. [38] for simulating wind energy for wind farm projects. They argued that it is key in assessing wind farm projects because of its ability to provide the associated uncertainty. The use of the EMA in precipitation prediction is presented by Hamill and Whitaker [28], who found it to improve the prediction skill, especially the Brier Skill Score.
The EMA method involves first obtaining the ensemble mean $\bar{a}_k$. We then match the present ensemble mean with historical ensemble mean(s). The observation(s) (i.e., $\{O^T: O_1^T, O_2^T, \ldots\}$) that correspond to the historical ensemble mean(s) are then used as the forecast value. To illustrate this, suppose the present calculated ensemble mean is 6.0 mm; we then search the historical ensemble means for a similar value of 6.0 mm. If the rainfall observed on that historical day was 0.8 mm, then our ensemble mean analogue forecast becomes 0.8 mm.
If a corresponding analogue cannot be found, then $\bar{a}_k$ is used and the case is considered a rare event, but if more than one analogue is found, the arithmetic mean of the analogues is used. In this study, the analogues were obtained from predictions of the MAM 2006 rainfall season. This period is considered an appropriate analogue season because it had the same Oceanic Niño Index of −0.2 [39,40] as MAM 2013, and the Inter–Tropical Convergence Zone, one of the major rainfall drivers over the study area, is normally over the same region.
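The EMA procedure for a single forecast day can be sketched as follows. The tolerance used to decide whether a historical ensemble mean counts as an analogue is a hypothetical parameter introduced for illustration only; the text does not state how closeness is measured.

```python
import numpy as np

def ensemble_mean_analogue(current_mean, past_means, past_obs, tol=0.1):
    """Ensemble mean analogue (EMA) forecast for one day.

    current_mean : today's ensemble mean (mm)
    past_means   : ensemble means from the analogue season (e.g., MAM 2006)
    past_obs     : observed rainfall on those analogue days (mm)
    tol          : hypothetical matching tolerance (mm) for identifying analogues
    """
    past_means, past_obs = np.asarray(past_means, float), np.asarray(past_obs, float)
    matches = np.abs(past_means - current_mean) <= tol
    if not matches.any():
        return float(current_mean)           # no analogue found: keep the ensemble mean (rare event)
    return float(past_obs[matches].mean())   # one or more analogues: average their observations

# Hypothetical example following the 6.0 mm illustration above.
print(ensemble_mean_analogue(6.0, [2.1, 6.0, 9.4], [0.5, 0.8, 12.0]))  # -> 0.8
```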

2.4.4. The Multi–Member Analogue Ensemble

In order to improve ensemble rainfall prediction, the multi–member analogue ensemble method (MAEM) is proposed and described here.
In the MAEM, for each ensemble member, its corresponding analogue is used to give a new ensemble member. The description of this method follows: for a given prediction $\phi_i$ that is part of the n–member ensemble, i.e.,
\phi_1, \phi_2, \ldots, \phi_i, \ldots, \phi_{n-1}, \phi_n
we look for its analogue and consider the observed value corresponding to $\phi_i$. This observed value, $\Phi_i$, becomes the new ensemble member, thus giving a new ensemble, i.e.,
\Phi_1, \Phi_2, \ldots, \Phi_i, \ldots, \Phi_{n-1}, \Phi_n
If more than one analogue is found, the arithmetic average of the analogues is used, i.e.,
\bar{\Phi}_i = \frac{1}{M} \sum_{j=1}^{M} \Phi_{ij}
where $M$ is the number of analogues obtained. If no corresponding analogue is found, the model prediction is used, as this could indicate a rare event. The prediction then becomes the arithmetic mean $\bar{\Phi}_k$ of the observed analogues $\Phi_k$, i.e.,
\bar{\Phi}_k = \frac{1}{n} \sum_{i=1}^{n} \Phi_{ik}
which was then compared with the observed rainfall event $O_k$.
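A sketch of the MAEM for one forecast day follows, using the same hypothetical analogue–matching tolerance as in the EMA sketch above; the member and observation values are illustrative.

```python
import numpy as np

def maem_forecast(members, past_members, past_obs, tol=0.1):
    """Multi-member analogue ensemble (MAEM) forecast for one day.

    members      : today's n ensemble member forecasts (mm)
    past_members : array of shape (n_days, n) with the same members over the analogue season
    past_obs     : observed rainfall on those analogue days (mm)
    tol          : hypothetical analogue-matching tolerance (mm), not specified in the text
    """
    members = np.asarray(members, float)
    past_members = np.asarray(past_members, float)
    past_obs = np.asarray(past_obs, float)

    new_members = []
    for i, phi in enumerate(members):
        matches = np.abs(past_members[:, i] - phi) <= tol   # analogues of member i
        if matches.any():
            new_members.append(past_obs[matches].mean())    # Phi_i: mean of the analogue observations
        else:
            new_members.append(phi)                         # no analogue: keep the raw prediction (rare event)
    return float(np.mean(new_members))                      # final forecast: mean of the new ensemble

# Hypothetical 3-member example with a 2-day analogue record.
print(maem_forecast([4.0, 6.0, 9.0],
                    [[4.1, 5.9, 2.0], [7.5, 6.1, 9.2]],
                    [1.2, 3.4]))
```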

2.4.5. Interpolation Method

This study adopted the inverse distance weighting (IDW) interpolation method for spatially displaying the performance of the ensemble methods and the simulations of the convective schemes. The IDW is used in many studies that present spatially varying patterns, e.g., Amiri and Mesgari [41], who used the IDW to model the spatial and temporal patterns of precipitation in northwestern Iran. According to Franke [42], the IDW is generalized by Equation (7):
F(x, y) = \frac{\sum_{k=1}^{n} w_k(x, y)\, f_k}{\sum_{k=1}^{n} w_k(x, y)}    (7)
where $f_k$ is known as a nodal function and $w_k(x, y)$ is a weighting function given by
w_k(x, y) = \left[\left(x - x_k\right)^2 + \left(y - y_k\right)^2\right]^{-1/2}

i.e., the inverse of the distance between the interpolation point $(x, y)$ and station $k$.
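A minimal sketch of this interpolation at a single point is given below, assuming the weight is the simple inverse of the distance to each station, consistent with the reconstruction of the weighting function above; the station coordinates and values are hypothetical.

```python
import numpy as np

def idw(x, y, station_x, station_y, station_values, eps=1e-9):
    """Inverse distance weighting (Equation (7)) evaluated at a single point (x, y).

    Weights are the inverse of the distance to each station; eps avoids
    division by zero when (x, y) coincides with a station location.
    """
    station_x = np.asarray(station_x, float)
    station_y = np.asarray(station_y, float)
    station_values = np.asarray(station_values, float)
    dist = np.sqrt((x - station_x) ** 2 + (y - station_y) ** 2)
    w = 1.0 / (dist + eps)
    return float(np.sum(w * station_values) / np.sum(w))

# Hypothetical example: three stations (longitude, latitude, seasonal rainfall in mm).
print(idw(32.5, 0.3, [32.6, 31.8, 33.1], [0.0, 0.4, 1.2], [620.0, 540.0, 480.0]))
```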

3. Results and Discussion

3.1. Overview of the MAM Seasonal Rainfall Totals over Uganda

The total rainfall recorded at the stations used in the study is illustrated in Figure 1, which was obtained by spatially interpolating, using the IDW method, the total rainfall recorded at each station over the MAM 2013 rainfall season. The total MAM 2013 rainfall over the entire study area was in the range of 200–900 mm.
The stations in northern Uganda received comparatively lower seasonal rainfall amounts of 200–500 mm. This was expected because this region normally has a unimodal rainfall distribution, with the rainfall onset around April/May and a peak around July/August [2]. Over western Uganda, the rainfall amount was in the range of 270–550 mm, while rainfall over eastern Uganda varied from 400 mm to 900 mm. The Lake Victoria Basin received rainfall in the range of 400–650 mm. The MAM 2013 seasonal rainfall over Uganda thus exhibited large spatial variations.

3.2. Performance of the Cumulus Schemes

The performance of the cumulus schemes is presented in Table 1 and Table 2. Table 1 shows the RMSE of the simulations with the different cumulus schemes, along with rankings of their performance, while Table 2 shows the ME of the simulations with the different cumulus schemes and their respective rankings based on the ME. The schemes yielded varied results for different regions of Uganda, as shown in Figure 3. For example, the KF scheme overestimated rainfall over northern Uganda (i.e., 300–750 mm) but presented comparable rainfall amounts over eastern Uganda (i.e., 400–900 mm). The BMJ, GD and G3 schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (i.e., 300–600 mm). The GF scheme captured the rainfall amount over the northern region (i.e., 250–450 mm) but underestimated rainfall over the Lake Victoria Basin (i.e., 150–300 mm), while the NT scheme generally underestimated the rainfall amount over most areas of Uganda.
Other studies, such as Mayor and Mesquita [12], found the GF scheme to represent the spatial precipitation distribution well and the BMJ scheme to present higher precipitation amounts. A related study by Ratna et al. [43] found the GD scheme to overestimate convective rainfall. Our findings show that the KF, GF and G3 schemes had comparatively smaller RMSE and better rankings based on the RMSE (Table 1), as well as better ME scores (Table 2). These schemes were therefore selected to generate additional ensemble members.

3.3. The Performance of Ensemble Mean

The spatially interpolated (IDW) RMSE of the ensemble mean is compared with that of the individual cumulus parameterization schemes in Figure 4d, while the corresponding comparison of the spatially interpolated ME is presented in Figure 5d.
The RMSE of the ensemble mean is presented in the 2nd column of Table 3, while the results for the bias (or ME) are presented in the 2nd column of Table 4. This study was run at a 10 km horizontal resolution, slightly finer than the 18 km resolution of the ensemble of the European Centre for Medium–Range Weather Forecasts (ECMWF), which could motivate investigation of the potential improvement in prediction skill obtainable at higher resolution.
The performance of the three best cumulus schemes (i.e., KF, GF and G3) and the ensemble mean (ENS) across the regions of the country is illustrated in Figure 4. Spatial analysis of Figure 4 shows that the RMSE of the ensemble mean was slightly higher for the western and eastern regions of Uganda (RMSE: 9–15 and 9–16, respectively). The ‘cattle corridor’ of Uganda and the Lake Victoria Basin had an RMSE of 8–10, while the northern region had 8–11.
The ensemble mean RMSE values are comparatively smaller than the RMSE values of the individual cumulus parameterization schemes over most of the study areas. Additional statistical analysis using Student's t–test at the 99% confidence level showed a significant difference between the RMSE of the ensemble mean and that of the individual convective parameterization schemes (KF: t = 4.73 and p < 0.001; GF: t = 5.14 and p < 0.001; and G3: t = 5.41 and p < 0.001). Examination of the RMSE at the individual study locations for the ensemble mean (Table 3) and the respective convective parameterization schemes (Table 1) shows that the ensemble mean yielded smaller RMSE magnitudes at most of the study locations, indicating that the ensemble mean gives better prediction performance than the individual convective parameterization schemes. These results confirm the findings of Segele et al. [23] and He et al. [5], among others, who found the ensemble mean more skillful than the simulations from individual ensemble members.
The ensemble mean did not show a significant improvement in the ME because, apart from northern Uganda, we still observed a negative bias over most parts of the country, as shown in Figure 5a–c. Relying on the RMSE, we can therefore argue that there is an improvement in performance when using the ensemble mean, but not necessarily a change in bias.

3.4. The Performance of Ensemble Mean Analogue

The RMSE results for the ensemble mean analogue (EMA) are presented in the 3rd column of Table 3, while the ME results are presented in the 3rd column of Table 4. The spatial analyses of the RMSE and ME results of the ensemble mean analogue are presented in Figure 4e and Figure 5e respectively.
The results show that, although the improvement in simulation indicated by the reduction in RMSE was not significant at the 99% level, 13 out of 21 stations (RMSE shown in italics in the 3rd column of Table 3) had an RMSE smaller than that of the ensemble mean. A slight improvement in the negative bias is observed (t = 1.710; p–value = 0.096), and additional analysis shows that the magnitude of the bias at 16 out of 21 stations (bold values in the 3rd column of Table 4) was less than 2.00.
Since the results for the ensemble mean (Section 3.3) showed that it is significantly better than the individual convective parameterization schemes, this slight further improvement presented by the ensemble mean analogue indicates that the ensemble mean analogue is also significantly better than the simulations from the individual convective parameterization schemes. Thus, the RMSE and ME results confirm that the EMA can provide a modest improvement in quantitative rainfall prediction and are in agreement with the results presented by Hamill and Whitaker [28], who found it to improve the precipitation prediction skill.

3.5. The Performance of Multi–Member Analogue Ensemble Method

The RMSE results for the multi–member analogue ensemble method are presented in the 4th column of Table 3, while the ME results are presented in the 4th column of Table 4. The spatial analyses of the RMSE and ME results of the multi–member analogue ensemble method are presented in the spatially interpolated Figure 4f and Figure 5f respectively.
We again observed that, although the improvement in simulation indicated by the reduction in RMSE was not significant at the 99% level, 17 out of 21 stations (RMSE shown in italics in the 4th column of Table 3) had an RMSE smaller than that of the ensemble mean. We further noted a slight improvement in RMSE, though not significant at the 99% level, when employing the multi–member analogue ensemble compared to the ensemble mean analogue, with 13 out of 21 stations (bold values in the 4th column of Table 3) having a smaller RMSE than the ensemble mean analogue.
The results further show a significant improvement in the negative bias at the 95% confidence level (t = 2.5285; p–value = 0.016) compared to the ME of the ensemble mean. Additional analysis shows that the magnitude of the ME at 18 out of 21 stations (bold values in the 4th column of Table 4) was less than 2.00. The RMSE and ME results confirm that the multi–member analogue ensemble method can improve rainfall prediction. Since these analyses were conducted for individual rainfall events over the MAM 2013 season, we expect this approach to be valid regardless of whether the season is normal, above–normal or below–normal.

4. Summary and Conclusions

The study investigated the potential for improving quantitative rainfall prediction using ensemble methods (the ensemble mean, ENS; the ensemble mean analogue, EMA; and the multi–member analogue ensemble, MAEM). It considered 18 ensemble members, which were generated by time–lagging at 6–hour intervals (0000, 0600, 1200 and 1800 UTC); perturbing the convective parameterization schemes by varying the entrainment rate by ±25%; and combining the simulations from the physical parameterization schemes used (i.e., a multi–physics ensemble). The study first analyzed the spatial distribution of the MAM 2013 rainfall and found that it was in the range of 200–900 mm, and that the MAM 2013 rainfall exhibited large spatial variations over the study region.
The study then assessed the performance of six cumulus parameterization schemes (KF, BMJ, GF, G3, NT and GD) and found varying performance over the different regions of Uganda, with the KF scheme overestimating rainfall over northern Uganda; the BMJ, GD and G3 schemes underestimating the rainfall amount over most parts of the country, especially the eastern region; the GF scheme capturing the rainfall amount over the northern region; and the NT scheme generally underestimating the rainfall amount over most areas. These schemes thus presented varying regional biases in their rainfall prediction performance, and no single scheme performed consistently over all the study locations. Overall, the KF, G3 and GF schemes presented comparatively better performance and were used to generate the ensemble members used in the study.
The study further assessed the performance of the ensemble mean, the ensemble mean analogue and the multi–member analogue ensemble. An improvement in the RMSE was observed when using the ensemble mean compared to the individual cumulus parameterization schemes. However, there was a non–significant change in the ME of the ensemble mean compared to the individual parameterization schemes, probably as a result of cancellation of the positive and negative biases. The ensemble mean analogue presented a reduction in the magnitude of the RMSE and a slight improvement in the ME, with 16 out of 21 stations having an ME magnitude less than 2.00. The multi–member analogue ensemble presented a further reduction in the RMSE and an additional improvement in the magnitude of the ME. We thus note that, although the ensemble mean improves the prediction accuracy, the ensemble mean analogue and the multi–member analogue ensemble present additional improvements in the quantitative precipitation prediction accuracy, with the multi–member analogue ensemble giving the best results. We also argue that the multi–member analogue ensemble method can be applied to precipitation prediction during any rainy season and is not affected by the nature of the season, since it was based on the prediction of daily rainfall rather than seasonal rainfall.

Author Contributions

Isaac Mugume, Triphonia Jakob Ngailo, Bob Alex Ogwang and Didier Ntwali conceived, designed and carried out the study; Michel D. S. Mesquita, Yazidhi Bamutaze, Revocatus Twinomuhangi and Fredrick Tumwine discussed the results; Charles Basalirwa, Daniel Waiswa and Joachim Reuder supervised the study.

Acknowledgments

The authors are grateful to the project “Partnership for Building Resilient Ecosystems and Livelihoods to Climate Change and Disaster Risks” (BREAD project: SIDA/331), under the Swedish International Development Cooperation Agency (SIDA), for the financial support, and to the project “Improving Weather Information Management in East Africa for effective service provision through the application of suitable ICTs” (WIMEA-ICT project: UGA–13/0018), under Norad’s Programme for Capacity Development in Higher Education and Research for Development (NORHED), for the technical assistance, and to the reviewers for the constructive feedback that greatly improved this manuscript. We are also grateful to the Uganda National Meteorological Authority (https://www.unma.go.ug) for providing the rainfall data used in the study, and to NCEP/NCAR for the lateral boundary condition data used to initialize the simulations.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tao, S.; Shen, S.; Li, Y.; Wang, Q.; Gao, P.; Mugume, I. Projected crop production under regional climate change using scenario data and modeling: Sensitivity to chosen sowing date and cultivar. Sustainability 2016, 8, 214. [Google Scholar] [CrossRef]
  2. Ogwang, B.; Chen, H.; Li, X.; Gao, C. The influence of topography on east African October to December climate: Sensitivity experiments with REGCM4. Adv. Meteorol. 2014, 2014, 143917. [Google Scholar] [CrossRef]
  3. Karuri, S.W.; Snow, R.W. Forecasting paediatric malaria admissions on the Kenya Coast using rainfall. Glob. Health Action 2016, 9, 29876. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Kabo-Bah, A.T.; Diji, C.J.; Nokoe, K.; Mulugetta, Y.; Obeng-Ofori, D.; Akpoti, K. Multiyear rainfall and temperature trends in the Volta river basin and their potential impact on hydropower generation in Ghana. Climate 2016, 4, 49. [Google Scholar] [CrossRef]
  5. He, S.; Raghavan, S.; Nguyen, N.; Liong, S.-Y. Ensemble rainfall forecasting with numerical weather prediction and radar-based nowcasting models. Hydrol. Process. 2013, 27, 1560–1571. [Google Scholar] [CrossRef]
  6. Ntwali, D.; Ogwang, B.; Ongoma, V. The impacts of topography on spatial and temporal rainfall distribution over Rwanda based on WRF model. Atmos. Clim. Sci. 2016, 6, 145–157. [Google Scholar] [CrossRef]
  7. Awange, J.; Anyah, R.; Agola, N.; Forootan, E.; Omondi, P. Potential impacts of climate and environmental change on the stored water of Lake Victoria Basin and economic implications. Water Resour. Res. 2013, 49, 8160–8173. [Google Scholar] [CrossRef] [Green Version]
  8. Mugume, I.; Mesquita, M.; Basalirwa, C.; Bamutaze, Y.; Reuder, J.; Nimusiima, A.; Waiswa, D.; Mujuni, G.; Tao, S.; Jacob Ngailo, T. Patterns of dekadal rainfall variation over a selected region in lake victoria basin, Uganda. Atmosphere 2016, 7, 150. [Google Scholar] [CrossRef]
  9. Ngailo, T.; Shaban, N.; Reuder, J.; Rutalebwa, E.; Mugume, I. Non homogeneous poisson process modelling of seasonal extreme rainfall events in Tanzania. Int. J. Sci. Res. 2016, 5, 1858–1868. [Google Scholar]
  10. Jie, W.; Wu, T.; Wang, J.; Li, W.; Polivka, T. Using a deterministic time-lagged ensemble forecast with a probabilistic threshold for improving 6–15 day summer precipitation prediction in China. Atmos. Res. 2015, 156, 142–159. [Google Scholar] [CrossRef]
  11. Coiffier, J. Fundamentals of Numerical Weather Prediction; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  12. Mayor, Y.; Mesquita, M. Numerical simulations of the 1 May 2012 deep convection event over Cuba: Sensitivity to cumulus and microphysical schemes in a high-resolution model. Adv. Meteorol. 2015, 2015, 973151. [Google Scholar] [CrossRef]
  13. Maussion, F.; Scherer, D.; Finkelnburg, R.; Richters, J.; Yang, W.; Yao, T. WRF simulation of a precipitation event over the Tibetan Plateau, China—An assessment using remote sensing and ground observations. Hydrol. Earth Syst. Sci. 2011, 15, 1795–1817. [Google Scholar] [CrossRef]
  14. ECMWF. What Is Ensemble Weather Forecasting? 2017. Available online: https://www.ecmwf.int/en/about/media-centre/fact-sheet-ensemble-weather-forecasting (accessed on 24 October 2017).
  15. Glahn, H.R.; Lowry, D.A. The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteorol. 1972, 11, 1203–1211. [Google Scholar] [CrossRef]
  16. Scheuerer, M. Probabilistic quantitative precipitation forecasting using ensemble model output statistics. Q. J. R. Meteorol. Soc. 2014, 140, 1086–1096. [Google Scholar] [CrossRef]
  17. Fraley, C.; Raftery, A.; Gneiting, T.; Sloughter, J.; Berrocal, V. Probabilistic weather forecasting in R. R. J. 2011, 3, 55–63. [Google Scholar]
  18. Mugume, I.; Basalirwa, C.; Waiswa, D.; Reuder, J.; Mesquita, M.D.S.; Tao, S.; Ngailo, T. Comparison of parametric and nonparametric methods for analyzing the bias of a numerical model. Model. Simul. Eng. 2016, 2016. [Google Scholar] [CrossRef]
  19. Gneiting, T.; Raftery, A. Weather forecasting with ensemble methods. Science 2005, 310, 248–249. [Google Scholar] [CrossRef] [PubMed]
  20. Hemri, S.; Scheuerer, M.; Pappenberger, F.; Bogner, K.; Haiden, T. Trends in the predictive performance of raw ensemble weather forecasts. Geophys. Res. Lett. 2014, 41, 9197–9205. [Google Scholar] [CrossRef] [Green Version]
  21. Whitaker, J.; Loughe, A. The relationship between ensemble spread and ensemble mean skill. Mon. Weather Rev. 1998, 126, 3292–3302. [Google Scholar] [CrossRef]
  22. Raftery, A.; Gneiting, T.; Balabdaoui, F.; Polakowski, M. Using Bayesian model averaging to calibrate forecast ensembles. Mon. Weather Rev. 2005, 133, 1155–1174. [Google Scholar] [CrossRef]
  23. Segele, Z.; Richman, M.; Leslie, L.; Lamb, P. Seasonal-to-interannual variability of Ethiopia/horn of Africa monsoon. Part II: Statistical multimodel ensemble rainfall predictions. J. Clim. 2015, 28, 3511–3536. [Google Scholar] [CrossRef]
  24. Evans, J.; Ji, F.; Abramowitz, G.; Ekström, M. Optimally choosing small ensemble members to produce robust climate simulations. Environ. Res. Lett. 2013, 8, 044050. [Google Scholar] [CrossRef] [Green Version]
  25. Zhu, J.; Kong, F.; Ran, L.; Lei, H. Bayesian model averaging with stratified sampling for probabilistic quantitative precipitation forecasting in northern China during summer 2010. Mon. Weather Rev. 2015, 143, 3628–3641. [Google Scholar] [CrossRef]
  26. Redmond, G.; Hodges, K.; Mcsweeney, C.; Jones, R.; Hein, D. Projected changes in tropical cyclones over Vietnam and the south China sea using a 25 km regional climate model perturbed physics ensemble. Clim. Dyn. 2015, 45, 1983–2000. [Google Scholar] [CrossRef]
  27. Fritsch, J.; Carbone, R. Improving quantitative precipitation forecasts in the warm season: A USWRP research and development strategy. Bull. Am. Meteorol. Soc. 2004, 85, 955–965. [Google Scholar] [CrossRef]
  28. Hamill, T.; Whitaker, J. Probabilistic quantitative precipitation forecasts based on reforecast analogs: Theory and application. Mon. Weather Rev. 2006, 134, 3209–3229. [Google Scholar] [CrossRef]
  29. Vanvyve, E.; Delle Monache, L.; Monaghan, A.; Pinto, J. Wind resource estimates with an analog ensemble approach. Renew. Energy 2015, 74, 761–773. [Google Scholar] [CrossRef]
  30. Kalnay, E.; Kanamitsu, M.; Kistler, R.; Collins, W.; Deaven, D.; Gandin, L.; Iredell, M.; Saha, S.; White, G.; Woollen, J.; et al. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc. 1996, 77, 437–471. [Google Scholar] [CrossRef]
  31. White, D.; Lubulwa, G.; Menz, K.; Zuo, H.; Wint, W.; Slingenbergh, J. Agro-climatic classification systems for estimating the global distribution of livestock numbers and commodities. Environ. Int. 2001, 27, 181–187. [Google Scholar] [CrossRef]
  32. Basalirwa, C. Delineation of uganda into climatological rainfall zones using the method of principal component analysis. Int. J. Climatol. 1995, 15, 1161–1177. [Google Scholar] [CrossRef]
  33. Funk, C.; Hoell, A.; Shukla, S.; Husak, G.; Michaelsen, J. The east African monsoon system: Seasonal climatologies and recent variations. In The Monsoons and Climate Change; Springer: Berlin, Germany, 2016; pp. 163–185. [Google Scholar]
  34. Yang, W.; Seager, R.; Cane, M.; Lyon, B. The annual cycle of East African precipitation. J. Clim. 2015, 28, 2385–2404. [Google Scholar] [CrossRef]
  35. Pizarro, R.; Garcia-Chevesich, P.; Valdes, R.; Dominguez, F.; Hossain, F.; Ffolliott, P.; Olivares, C.; Morales, C.; Balocchi, F.; Bro, P. Inland water bodies in Chile can locally increase rainfall intensity. J. Hydrol. 2013, 481, 56–63. [Google Scholar] [CrossRef]
  36. Von Storch, H.; Zwiers, F.W. Statistical Analysis in Climate Research; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  37. Delle Monache, L.; Eckel, F.; Rife, D.; Nagarajan, B.; Searight, K. Probabilistic weather prediction with an analog ensemble. Mon. Weather Rev. 2013, 141, 3498–3516. [Google Scholar] [CrossRef]
  38. Horvath, K.; Bajić, A.; Ivatek-Šahdan, S.; Hrastinski, M.; Odak Plenković, I.; Stanešić, A.; Tudor, M.; Kovačić, T. Overview of meteorological research on the project “weather intelligence for wind energy”-will4wind. Hrvat. Meteorol. Čas. 2016, 50, 91–104. [Google Scholar]
  39. CPC. Cold and Warm Episodes by Season 2017. Available online: http://www.cpc.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml (accessed on 1 February 2017).
  40. Hafez, Y. Study on the relationship between the oceanic nino index and surface air temperature and precipitation rate over the Kingdom of Saudi Arabia. J. Geosci. Environ. Prot. 2016, 4, 146–162. [Google Scholar] [CrossRef]
  41. Amiri, M.; Mesgari, M. Modeling the Spatial and Temporal Variability of Precipitation in Northwest Iran. Atmosphere 2017, 8, 254. [Google Scholar] [CrossRef]
  42. Franke, R. Scattered data interpolation: Tests of some methods. Math. Comput. 1982, 38, 181–200. [Google Scholar]
  43. Ratna, S.; Ratnam, J.; Behera, S.; Ndarana, T.; Takahashi, K.; Yamagata, T. Performance assessment of three convective parameterization schemes in WRF for downscaling summer rainfall over South Africa. Clim. Dyn. 2014, 42, 2931–2953. [Google Scholar] [CrossRef]
Figure 1. Spatial pattern of the total MAM 2013 seasonal rainfall over the study area (Uganda), obtained using the inverse distance weighting (IDW) interpolation method.
Figure 2. The domains used in running the WRF model. Domain d01 covers Africa at 90 km horizontal resolution; domain d02 covers equatorial Africa at 30 km horizontal resolution; and domain d03 (in red) covers Uganda, the study region, at 10 km horizontal resolution.
Figure 3. The spatially interpolated (IDW) total rainfall simulated by the cumulus parameterization schemes for the MAM 2013 rainfall period.
Figure 4. The spatially interpolated (IDW) RMSE of the cumulus parameterization schemes KF (a), GF (b) and G3 (c), and of the ensemble methods ENS (d), EMA (e) and MAEM (f). The results of the KF, GF and G3 schemes are shown because they had comparatively smaller RMSE than the other cumulus schemes.
Figure 5. The spatially interpolated (IDW) mean error (ME) of the cumulus parameterization schemes KF (a), GF (b) and G3 (c), and of the ensemble methods ENS (d), EMA (e) and MAEM (f). The results of the KF, GF and G3 schemes are shown because they had comparatively smaller ME magnitudes than the other cumulus schemes.
Table 1. The RMSE of the simulations with the different cumulus parameterization schemes.
Station        RMSE Scores (mm)                                RMSE Rankings
               KF      BMJ     GF      G3      NT      GD      KF   BMJ  GF   G3   NT   GD
Arua           12.19   12.57   14.31   14.13   24.18   14.51   1    2    4    3    6    5
Buginyanya     19.66   58.76   59.51   33.10   62.66   42.89   1    4    5    2    6    3
Bushenyi       14.70   21.84   19.25   20.51   21.70   19.03   1    6    3    4    5    2
Entebbe        45.35   55.99   53.03   51.93   61.20   53.22   1    5    4    2    6    3
Gulu           57.98   17.49   24.05   23.04    6.53   25.21   6    2    4    3    1    5
Jinja          14.35   21.62   23.33   22.13   29.31   29.62   1    2    4    3    5    6
Kibanda        31.40    8.21    8.35    7.62    7.86    7.77   6    4    5    1    3    2
Kabale          8.19   15.95   10.04   11.31    6.28   13.00   2    6    3    4    1    5
Kamenyamigo    27.94   29.32   28.67   29.81   34.57   28.28   1    4    3    5    6    2
Kasese         19.18   14.63   15.43   18.02   19.27   16.72   5    1    2    4    6    3
Kitgum         16.44   14.69   16.31   18.33   24.77   18.32   3    1    2    5    6    4
Kituza         40.00   43.14   40.80   42.91   46.13   50.84   1    4    2    3    5    6
Lira           20.42   30.50   31.22   26.30   33.28   26.95   1    4    5    2    6    3
Makerere       33.76   43.77   42.71   41.29   46.78   37.59   1    5    4    3    6    2
Mbarara        11.73   18.17   18.80   22.16   18.68   15.68   1    3    5    6    4    2
Masindi        19.59   27.75   26.33   26.83   38.51   32.24   1    4    2    3    6    5
Namulonge      36.39   35.90   34.72   35.17   34.87   44.21   5    3    1    4    2    6
Ntusi           8.85    9.33    8.81    8.20    8.53    8.70   5    6    4    1    2    3
Serere         14.94   23.23   23.12   19.17   24.27   25.51   1    4    3    2    5    6
Soroti         16.08   27.62   25.25   18.84   28.24   25.93   1    5    3    2    6    4
Tororo         34.11   16.34   18.72   14.72   34.17   15.37   5    3    4    1    6    2
Average        23.96   26.04   25.85   24.07   29.13   26.27   2.38 3.71 3.43 3.00 4.71 3.76
Table 2. The ME of the simulations with the different cumulus parameterization schemes.
Station        ME Scores (mm)                                  ME Rankings
               KF      BMJ     GF      G3      NT      GD      KF   BMJ  GF   G3   NT   GD
Arua           −0.89    0.91    1.83    1.20   −4.49   −1.97   1    2    4    3    6    5
Buginyanya     −2.08  −11.77  −11.93   −6.00  −12.54   −8.31   1    4    5    2    6    3
Bushenyi       −2.19   −4.03   −3.22   −3.78   −3.99   −3.34   1    6    2    4    5    3
Entebbe        −9.06  −11.36  −10.73  −10.50  −12.44  −10.77   1    5    3    2    6    4
Gulu           11.76    3.39    4.76    4.50    0.64    4.93   6    2    4    3    1    5
Jinja          −2.15   −4.14   −4.58   −4.31   −5.89   −5.93   1    2    4    3    5    6
Kibanda         6.22   −0.04    0.53   −0.18   −0.08    0.34   6    1    5    3    2    4
Kabale          0.83    2.75    1.65    1.90    0.31    2.16   2    6    3    4    1    5
Kamenyamigo    −5.31   −5.58   −5.50   −5.76   −6.78   −5.46   1    4    3    5    6    2
Kasese         −3.55   −2.39   −2.68   −3.31   −3.76   −2.98   5    1    2    4    6    3
Kitgum         −2.90   −2.54   −2.93   −3.38   −4.80   −3.36   2    1    3    5    6    4
Kituza         −7.94   −8.57   −8.08   −8.53   −9.23  −10.24   1    4    2    3    5    6
Lira           −3.43   −5.85   −6.00   −4.94   −6.51   −5.11   1    4    5    2    6    3
Makerere       −6.74   −8.75   −8.51   −8.22   −9.39   −7.39   1    5    4    3    6    2
Mbarara         1.30   −2.96   −3.17   −4.02   −3.29   −2.51   1    3    4    6    5    2
Masindi        −2.34   −4.63   −4.29   −4.50   −7.26   −5.78   1    4    2    3    6    5
Namulonge      −7.22   −7.06   −6.82   −6.89   −6.83   −8.81   5    4    1    3    2    6
Ntusi           0.32   −0.98   −0.54   −0.55   −0.48    0.47   1    6    4    5    3    2
Serere         −2.61   −4.53   −4.51   −3.66   −4.78   −5.04   1    4    3    2    5    6
Soroti         −2.34   −5.30   −4.63   −3.35   −5.41   −4.98   1    5    3    2    6    4
Tororo          6.30   −1.40   −2.51   −1.80   −6.58   −1.49   5    1    4    3    6    2
Average        −1.62   −4.04   −3.90   −3.62   −5.41   −4.07   2.14 3.52 3.33 3.33 4.76 3.90
Table 3. The RMSE for the ensemble mean (ENS), the ensemble mean analogue (EMA) and the multi–member analogue ensemble (MAEM). Italicized values indicate an improvement in the RMSE of the EMA or the MAEM relative to the RMSE of the ENS; bold values in the MAEM column indicate a reduction in RMSE relative to the RMSE of the EMA.
Station        RMSE (mm)
               ENS     EMA     MAEM
Arua           10.93   10.70   10.85
Buginyanya     15.23   15.36   15.02
Bushenyi        9.56    9.33    9.38
Entebbe        11.51   11.28   10.98
Gulu            7.78    7.76    8.04
Jinja           7.23    7.07    6.99
Kabale          5.81    5.92    5.87
Kamenyamigo    10.84   10.67   10.70
Kasese          8.37    8.21    8.28
Kibanda         6.91    6.89    6.90
Kitgum          8.34    8.96    8.32
Kituza         11.71   11.65   11.25
Lira           10.97   11.8    10.76
Makerere       11.02   10.83   10.63
Masindi        15.84   15.82   15.80
Mbarara        10.01    9.99   10.00
Namulonge      11.20   11.11   10.80
Ntusi           7.43    7.45    7.40
Serere          7.30    7.62    7.18
Soroti          9.83   10.16    9.84
Tororo         12.65   12.82   12.99
Table 4. The ME (or bias) for the ensemble mean (ENS), the ensemble mean analogue (EMA) and the multi–member analogue ensemble (MAEM). Bold values indicate an ME magnitude of less than 2.00.
Station        Mean Error (or Bias, mm)
               ENS     EMA     MAEM
Arua            0.42    1.45    2.61
Buginyanya     −2.92   −2.02   −0.54
Bushenyi       −1.62   −1.04   −1.06
Entebbe        −5.20   −1.18   −1.94
Gulu            4.08    4.26    5.13
Jinja          −1.60   −0.78    0.08
Kabale          0.57    1.00    1.25
Kamenyamigo    −2.59   −2.35   −2.19
Kasese         −1.49   −1.04   −0.76
Kibanda         0.89    0.56    1.15
Kitgum         −1.02    0.15    0.94
Kituza         −4.01   −1.52   −0.43
Lira           −1.96    0.61    0.63
Makerere       −3.92   −2.50   −1.39
Masindi        −1.49   −1.03   −0.54
Mbarara        −0.69   −0.71   −0.63
Namulonge      −3.27   −2.13   −1.63
Ntusi           0.03   −0.03    0.05
Serere         −1.15    0.17    0.34
Soroti         −0.94    0.52    1.39
Tororo          1.08    1.32    1.89
