Proceeding Paper

Sensitivity Assessment of WRF Parameterizations over Europe †

by
Ioannis Stergiou
1,
Efthimios Tagaris
1,2 and
Rafaella-Eleni P. Sotiropoulou
1,2,*
1
Department of Mechanical Engineering, University of Western Macedonia, 50132 Kozani, Greece
2
Department of Environmental Engineering, University of Western Macedonia, 50132 Kozani, Greece
*
Author to whom correspondence should be addressed.
Presented at the 2nd International Electronic Conference on Atmospheric Sciences, 16–31 July 2017; Available online: http://sciforum.net/conference/ecas2017.
Proceedings 2017, 1(5), 119; https://doi.org/10.3390/ecas2017-04138
Published: 17 July 2017

Abstract

The performance of the parameterization schemes used in the WRF model is assessed for temperature and precipitation over Europe at a 36 km × 36 km grid resolution, using gridded data from the ECA & D 0.25° regular grid. Simulations are performed for a winter (i.e., January 2015) and a summer (i.e., July 2015) month using the two-way nesting approach. A step-wise decision approach is followed, beginning with 18 simulations for the various Microphysics schemes, followed by 45 more covering the model's PBL, Cumulus, Longwave, Shortwave and Land Surface schemes. The best performing scheme at each step is chosen with the entropy-weighted 'Technique for Order Preference by Similarity to the Ideal Solution' (TOPSIS). The concluding scheme set consists of the Mansell-Ziegler-Bruning Microphysics scheme, the Bougeault-Lacarrere PBL scheme, the Kain-Fritsch Cumulus scheme, the RRTMG Fast scheme for longwave radiation, the New Goddard scheme for shortwave radiation, and a season-dependent choice of Land Surface scheme.

1. Introduction

The Advanced Research Weather Research and Forecasting model (ARW-WRF, hereafter WRF) [1] is a nonhydrostatic mesoscale numerical weather prediction system that includes a wide range of physical parameterizations and can be initialized either with data from a GCM or with reanalysis data. It is an ideal tool for studying phenomena that require high spatial resolution. WRF applications typically use a single set of parameterization schemes because running all possible combinations is computationally prohibitive. Choosing the best performing set of parameterizations is challenging because their performance depends strongly on location and time. A significant number of studies have explored WRF sensitivity to different parameterization schemes, e.g., [2,3,4,5,6].
Mooney et al. [2] evaluated the sensitivity of WRF to several parameterization schemes for regional climates of Europe over the period 1990–1995. Their results for temperature show a significant dependence on the land surface model, while averaged daily precipitation appears relatively insensitive to the choice of longwave radiation scheme. They conclude that modelling precipitation is problematic for WRF, with biases of up to 100%. Borge et al. [3] studied WRF sensitivity over the Iberian Peninsula for two 1-week periods in the winter and summer of 2005. Their findings suggest that no particular scheme or option produces the best results for all the statistical parameters and/or geographical locations examined; the optimum model configuration they provided is based on aggregated performance. Bukovsky and Karoly [4] examined how different land surface models and cumulus schemes affect precipitation over North America for May through August over the period 1991–1995. Their results showed that precipitation was sensitive to the choice of land surface model and cumulus scheme, emphasizing the importance of testing WRF output for sensitivity to parameterizations in regional climate modelling applications. Jin et al. [5] presented a sensitivity study of four land surface schemes in the WRF model over the western US. Their year-long simulations (1 October 1995 to 30 September 1996) showed that land surface processes strongly affect temperature but have little effect on precipitation, which the model overestimates. Flaounas et al. [6] examined the sensitivity of WRF to convection and planetary boundary layer (PBL) parameterization in a study of the 2006 West African monsoon. Their results show that PBL schemes have the strongest effect on the vertical distribution of temperature, humidity, and rainfall amount, whereas precipitation variability is particularly sensitive to the convection parameterization scheme.
The objective of this study is to assess the sensitivity of WRF parameterizations over Europe at a 36 × 36 km grid cell resolution and to produce a final parameterization combination that performs best over the whole European region. The long-term purpose is to calibrate the RCM to its best-performing setup for use in downscaling GCM data.

2. Method

2.1. Modelling Domains and Initialization

The Weather Research and Forecasting (WRF) model [1], version 3.7.1, is used here to dynamically downscale against the ENSEMBLES daily gridded observational dataset (E-OBS) [7,8] in a nesting approach over Europe, in order to assess the model's sensitivity to different parameterization setups by examining its ability to reproduce the spatial patterns of mean temperature and precipitation. Because running WRF for all cases is computationally prohibitive, the simulations are performed for one winter and one summer month (i.e., January and July 2015). The dynamical downscaling follows a two-way nesting approach with grid resolutions of 108 km and 36 km, the finer nested domain covering the European region (Figure 1).
The initial set of simulations concerned the Microphysics parameterization schemes, with all other parameterizations at their default values. The second simulation group explored the effect of the PBL schemes, which have no direct interactions with microphysics [9] (Figure 2); these were followed by the Cumulus parameterizations, which do not interact with the PBL schemes, then the Longwave and Shortwave radiation schemes, which are independent of the previous ones, and finally the Land Surface schemes. Our simulation groups include most of the options the WRF model offers. Options not included in this study were either extremely time consuming, unable to run in the model's multi-core mode, or did not produce hourly output and were therefore excluded for not being on the same time scale as the rest.
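The step-wise decision approach above can be sketched in outline. This is our own illustration, not the authors' code; `run_wrf` and `topsis_rank` are hypothetical stand-ins for a simulation-plus-evaluation run and for the multi-criteria ranking, respectively.

```python
# Illustrative outline of the step-wise selection loop: each physics group
# is tested in turn, the best option is frozen, and the search moves on.
# `run_wrf` and `topsis_rank` are hypothetical stand-ins.

def select_schemes(groups, run_wrf, topsis_rank):
    """groups: ordered mapping of physics group -> list of scheme options."""
    chosen = {}
    for group, options in groups.items():
        scores = {}
        for opt in options:
            config = {**chosen, group: opt}  # winners of earlier groups stay fixed
            scores[opt] = run_wrf(config)    # statistical measures vs. observations
        ranking = topsis_rank(scores)        # multi-criteria ranking, best first
        chosen[group] = ranking[0]           # freeze the top-ranked option
    return chosen
```

Because each group is evaluated with the winners of all previous groups held fixed, the cost grows as the sum of group sizes (here 18 + 45 simulations) rather than as their product.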
At the end of each simulation group, statistical measures of the model's performance were calculated (Table 1) and a spatial distribution map of the mean bias was created. These measures were estimated by comparing the model's mean daily output, for every grid cell, to the E-OBS dataset from the EU-FP6 project ENSEMBLES, provided by the ECA & D project (http://www.ecad.eu).
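A hedged sketch of how such measures can be computed, assuming the four measures shown in the result tables (mean bias, root square error, Willmott's index of agreement, and mean absolute error); the function name `evaluate` is our own:

```python
import numpy as np

# Our own sketch (not the authors' code) of the four statistical measures
# from the result tables, computed over paired, flattened arrays of model
# output and gridded observations.

def evaluate(model, obs):
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    diff = model - obs
    obar = obs.mean()
    # Willmott's index of agreement: 1.0 indicates perfect agreement.
    ioa = 1.0 - (diff ** 2).sum() / (
        (np.abs(model - obar) + np.abs(obs - obar)) ** 2).sum()
    return {
        "mean_bias": diff.mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
        "ioa": ioa,
        "mae": np.abs(diff).mean(),
    }
```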
To identify the best parameterization option for each simulation group, the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) method was utilized; it is a multi-criteria decision analysis method summarized below. Our decision making focused on mean temperature, the variable best forecast by numerical models; the effects of our scheme choices on precipitation were also assessed.

2.2. Technique for Order Preference by Similarity to the Ideal Solution

The TOPSIS method was first developed by Hwang and Yoon [10], with further developments by Yoon [11] and Hwang et al. [12]. It ranks the alternatives according to their distances from the ideal and the negative-ideal solution: the best alternative simultaneously has the shortest distance from the ideal solution and the farthest distance from the negative-ideal solution. Among the advantages of the TOPSIS method are simplicity, rationality, comprehensibility, good computational efficiency, and the ability to measure the relative performance of each alternative in a simple mathematical form. TOPSIS is a compensatory aggregation method that compares a set of alternatives by identifying weights for each criterion, normalizing the scores for each criterion, and calculating the geometric distance between each alternative and the ideal alternative, which has the best score in each criterion. The TOPSIS process is carried out as follows:
Step 1:
Creating an evaluation matrix consisting of m alternatives and n criteria.
Step 2:
Normalizing the evaluation matrix.
Step 3:
Calculating a weighted normalized decision matrix by determining the weights of the various factors. In this study Shannon’s entropy theory [13] was adopted in order to calculate the weighting factors.
Step 4:
Determining the positive-ideal solution (PIS) and the negative-ideal solution (NIS) by defining each of the criteria in use as positive or negative.
Step 5:
Calculating the distance between the target alternative and (PIS) and the distance between the target alternative and (NIS).
Step 6:
Calculating the Closeness Coefficient (CC) of each alternative. The coefficient is defined to determine the ranking order of each alternative.
Step 7:
Determining the ranking order of all alternatives according to the closeness to the ideal solution which is based on the criteria we have inserted in the method and selecting the best or the worst one from the set of feasible alternatives.
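Steps 1–7 can be condensed into a short sketch. This is a minimal illustration of entropy-weighted TOPSIS, not the authors' implementation; it assumes a strictly positive decision matrix so that the entropy terms are defined.

```python
import numpy as np

# Minimal entropy-weighted TOPSIS following Steps 1-7 above. X holds
# m alternatives by n criteria (strictly positive values assumed);
# benefit[j] is True when larger values of criterion j are better.

def topsis(X, benefit):
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    # Step 2: vector-normalize each criterion column.
    R = X / np.sqrt((X ** 2).sum(axis=0))
    # Step 3: Shannon-entropy weights from the column proportions.
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)
    w = (1.0 - E) / (1.0 - E).sum()
    V = R * w
    # Step 4: positive- and negative-ideal solutions per criterion.
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Step 5: Euclidean distances to the two ideal solutions.
    d_pos = np.sqrt(((V - pis) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - nis) ** 2).sum(axis=1))
    # Step 6: closeness coefficient in [0, 1], 1 being ideal.
    cc = d_neg / (d_pos + d_neg)
    # Step 7: alternatives ordered from best to worst.
    return np.argsort(-cc)
```

An alternative that scores best on every criterion attains the positive-ideal solution exactly (closeness coefficient 1) and is ranked first.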

3. Results and Discussion

The various options of the Microphysics parameterization schemes were assessed first, keeping all other model options at their default values. The statistical measures were calculated for each simulation and used as input for the multi-criteria ranking method. The TOPSIS ranking results, as well as the statistical measures, are shown in Table 2. Option 17, the NSSL 2-moment Scheme [14], was chosen as the best Microphysics parameterization scheme: it ranks first for temperature in July and third for temperature in January. The two higher-ranked schemes for January temperature (i.e., the CAM V5.1 2-moment 5-class Scheme and the SBU Stony Brook University Scheme) perform less well for July temperature. In addition, option 17 shows one of the best performances in predicting mean precipitation for January. It is not among the best for predicting July precipitation, but that month has very low and location-dependent precipitation rates in Europe. The NSSL 2-moment scheme is a double-moment scheme for cloud droplets, rain drops, ice crystals, snow, graupel, and hail, with one prediction equation for mass mixing ratio (kg/kg) per species (Qrain, Qsnow, etc.) and one for number concentration (#/kg) per species (Nrain, Nsnow, etc.).
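The double-moment bookkeeping described above, a mass mixing ratio Q and a number concentration N per species, can be illustrated with a minimal, purely hypothetical sketch (the values and function name are ours, not the scheme's internals):

```python
# Purely hypothetical illustration of double-moment bookkeeping: each
# hydrometeor species carries a mass mixing ratio Q (kg/kg) and a number
# concentration N (#/kg); their ratio gives a mean particle mass.

SPECIES = ("cloud", "rain", "ice", "snow", "graupel", "hail")

def mean_particle_mass(q, n):
    """Mean mass per particle (kg) for each species; None where N is zero."""
    return {s: (q[s] / n[s] if n[s] > 0 else None) for s in SPECIES}
```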
With the Microphysics option set to 17, we conducted the second set of simulations, assessing the PBL options provided by the WRF model. PBL options work only with certain Surface Layer options, so specific PBL/Surface Layer combinations had to be used, as presented in Table 3. A PBL scheme's purpose is to distribute surface fluxes through boundary layer eddy fluxes and to allow for PBL growth by entrainment. There are two classes of PBL schemes:
-
Turbulent kinetic energy prediction (Mellor-Yamada-Janjic, MYNN, Bougeault-Lacarrere, TEMF, QNSE, CAM UW)
-
Diagnostic non-local (YSU, GFS, MRF, ACM2)
The Surface Layer schemes use similarity theory to determine exchange coefficients and diagnostics of 2 m temperature and moisture and 10 m winds. They provide the exchange coefficients to the land-surface models, the friction velocity to the PBL scheme, and the surface fluxes over water points. These schemes differ in their stability functions and roughness lengths. The best performing option for temperature was PBL option 8, the Bougeault-Lacarrere Scheme [29], in combination with Surface Layer option 1, the MM5 Similarity scheme [30]. However, these options are not the best for precipitation. The Bougeault-Lacarrere Scheme is a turbulent kinetic energy (TKE) prediction scheme, while the MM5 Similarity scheme is based on Monin-Obukhov theory with a Carlson-Boland viscous sub-layer and standard similarity functions.
The next simulation group focused on the Cumulus parameterization schemes shown in Table 4. Convective parameterization schemes were designed to reduce atmospheric instability in the model; prediction of precipitation is a by-product of the way a scheme does this. Consequently, these schemes may not predict the location and timing of convective precipitation as well as we might expect, although for climate models the location and timing of precipitation matter less than for weather forecast models. The schemes that performed best for temperature were the model's default Kain-Fritsch Scheme [31] (option 1) and the OSAS Old Simplified Arakawa-Schubert [32] (option 4). We kept the default Kain-Fritsch Scheme (option 1) for the next simulation group, since it is also better for winter precipitation. The Kain-Fritsch Scheme is a deep and shallow convection sub-grid scheme using a mass flux approach with downdrafts and a CAPE removal time scale. It includes cloud, rain, ice and snow detrainment.
Longwave radiation schemes formed the simulation group that followed. These schemes compute clear-sky and cloudy upward and downward radiation fluxes and consider IR emission from layers. Surface emissivity is based on land type; flux divergence leads to cooling in a layer, while the downward flux at the surface is important in the land energy budget. IR radiation generally produces cooling in clear air (~2 K/day), stronger cooling at cloud tops, and warming at cloud base. The options provided by the model are shown in Table 5. In the ranking of these simulations, the RRTMG Fast version Longwave Scheme [33] (option 24) ranked top in predicting mean temperature for both January and July, with a relatively high ranking for January precipitation. The RRTMG scheme is a newer version of the Rapid Radiative Transfer Model that includes the Monte Carlo Independent Column Approximation (MCICA) [34] method of random cloud overlap.
The Shortwave radiation schemes were assessed next, according to the options in Table 6. These schemes compute clear-sky and cloudy solar fluxes, including the annual and diurnal solar cycles. Most consider both downward and upward (reflected) fluxes; the Dudhia scheme has only a downward flux. They primarily produce a warming effect in clear sky and are a very important component of the surface energy balance. The New Goddard Shortwave Scheme [35] (option 5) is among the best schemes for simulating temperature in both months, although not among the best for precipitation prediction.
Table 3. Statistical measures and TOPSIS ranking for the PBL/Surface Layer simulation group.
Option | PBL/Surface Layer Scheme | Mean Bias | Root Square Error | Index of Agreement | Mean Absolute Error | TOPSIS Ranking (each measure reported for Temp and Prec, for JAN and JUL)
1/1YSU/MM5 Yonsei University Scheme [36]/MM5 [30]0.170.830.03−0.122.591.833.363.830.970.980.810.731.861.401.561.912226
2/2MYJ/Eta Mellor-Yamada-Janjic Scheme [37]/Eta [38]0.331.270.09−0.622.612.133.384.000.970.970.800.721.901.671.562.151111813
4/4QNSE/QNSE Quasi-normal Scale Elimination Scheme [39]0.411.92−0.03−0.672.632.603.484.260.970.950.800.701.942.141.622.261418614
5/1MYNN2/MM5 Mellor-Yamada Nakanishi Niino Level 2.5 [40]/[30]0.360.930.11−0.202.631.903.363.770.970.970.800.731.931.461.561.91125114
5/2MYNN2/Eta [40]/[38]0.851.580.22−0.282.862.4810.115.940.960.960.290.512.151.922.432.2218161816
5/5MYNN2/MYNN [40]0.291.510.11−0.442.632.283.353.870.970.960.810.731.911.811.562.048121210
6/1MYNN3/MM5 Mellor-Yamada Nakanishi Niino Level 3 [41]/[30]0.380.960.10−0.182.631.913.373.820.970.970.800.731.931.471.561.9313797
6/2MYNN3/Eta [41]/[38]0.831.560.11−0.282.842.4311.466.200.960.960.250.502.131.892.492.2517151717
6/5MYNN3/MYNN [41]0.311.550.10−0.412.622.293.363.960.970.960.810.721.911.841.562.079131012
7/1ACM2/MM5 Asymmetric Convection Model 2 Scheme [42]/[30]0.221.08−0.030.042.591.963.373.770.970.970.810.731.881.541.581.817932
7/7ACM2/Pleim-Xiu [42]/[43]0.221.58−0.11−0.262.592.323.483.960.970.960.800.721.881.891.631.97614139
8/1BouLac/MM5 Bougeault-Lacarrere Scheme [29]/[30]0.0020.69−0.07−0.282.611.793.404.000.970.980.810.721.851.361.602.0211711
8/2BouLac/Eta [29]/[38]0.822.080.34−0.032.983.188.606.730.960.930.330.422.222.412.512.3516191918
9/1UW/MM5 University of Washington Scheme [44]/[30]0.320.930.12−0.122.591.893.363.730.970.970.800.731.881.451.551.87106143
9/2UW/Eta [44]/[38]0.941.680.33−0.132.932.558.755.110.960.950.330.582.201.992.392.1519171615
10/10TEMF/TEMF Surface Layer Scheme [45]0.650.97−0.77−3.222.772.164.629.810.960.970.730.462.111.672.074.3915101519
11/1Shin-Hong/MM5 Scale-aware Scheme [46]/[30]0.200.850.04−0.112.591.853.363.820.970.980.810.731.861.411.561.915345
12/1GBM/MM5 Grenier-Bretherton-McCaa Scheme [47]/[30]0.170.990.05−0.182.571.943.363.850.970.970.810.721.851.491.561.933858
99/1MRF/MM5 [48]/[30]−0.190.860.020.132.671.883.313.640.960.980.810.741.881.451.551.764411
Table 4. Statistical measures and TOPSIS ranking for the Cumulus simulation group.
Option | Cumulus Scheme | Mean Bias | Root Square Error | Index of Agreement | Mean Absolute Error | TOPSIS Ranking (each measure reported for Temp and Prec, for JAN and JUL)
1Kain-Fritsch Scheme [31]0.0020.691−0.073−0.2822.6081.7913.4044.0010.9660.9770.8140.7151.8541.3571.5952.0203127
2BMJ Betts-Miller-Janjic Scheme [37]−0.0040.7180.0070.2222.6081.8013.3453.4430.9660.9770.8140.7531.8511.3661.5711.6518235
3GF Grell-Freitas Ensemble Scheme [49]0.0030.902−0.0870.1662.6071.8663.4273.6680.9660.9750.8100.7361.8491.4381.6031.7285672
4OSAS Old Simplified Arakawa-Schubert [32]−0.0010.8270.0060.3352.6081.8353.3383.4110.9660.9760.8160.7511.8501.4011.5591.6111366
5G3 Grell 3D Ensemble Scheme [50]−0.0020.923−0.1100.2032.6081.8733.3853.5300.9660.9750.8170.7491.8501.4461.5901.6544884
6Tiedtke Scheme [51]−0.0030.8510.0110.5872.6191.8393.3203.4510.9650.9760.8200.7421.8601.4081.5441.57274110
14NSAS New Simplified Arakawa-Schubert [52]−0.0080.902−0.1520.3842.6091.8653.4203.9110.9660.9760.8160.6981.8511.4391.6161.7149598
16New Tiedtke Scheme [53]0.0460.9600.2290.5142.6201.8713.2693.2680.9650.9750.8110.7631.8651.4481.4961.544109109
93GD Grell-Devenyi Ensemble Scheme [50]−0.0010.9130.0460.1712.6061.8713.3833.6170.9660.9750.8020.7371.8491.4421.5771.7212753
99old KF Old Kain-Fritsch Scheme [54]0.0030.991−0.038−0.0272.6041.9123.3553.7310.9660.9740.8140.7281.8471.4781.5791.89161041
Table 5. Statistical measures and TOPSIS ranking for the Longwave Radiation simulation group.
Option | Longwave Scheme | Mean Bias | Root Square Error | Index of Agreement | Mean Absolute Error | TOPSIS Ranking (each measure reported for Temp and Prec, for JAN and JUL)
1RRTM Longwave Scheme [55]−0.002−0.6910.0730.2822.6081.7913.4044.0010.9660.9770.8140.7151.8541.3571.5952.0203244
3CAM Longwave Scheme [56]−0.647−1.1160.0750.1352.7171.9513.3913.8280.9630.9730.8130.7242.0621.5221.5921.9357651
4RRTMG Longwave Scheme [33]−0.319−0.7910.0540.2382.6051.8093.3913.9410.9660.9760.8140.7181.9061.3771.5821.9964422
5New Goddard Longwave Scheme [35]−0.479−0.9360.0820.2402.5991.8583.3913.9290.9670.9750.8140.7201.9331.4291.5901.9946563
7FLG Fu-Liou-Gu Longwave [57] −0.335−1.9810.023−0.3082.6153.1213.4214.2140.9660.9310.8100.6461.9162.3571.6092.0335716
24RRTMG Fast Version−0.247−0.3790.0540.3232.5821.6793.3924.0340.9670.9800.8140.7121.8801.2491.5822.0511135
31Held-Suarez Relaxation Longwave −11.015−10.658−0.406−0.79711.94711.1233.3523.4430.6240.5840.7930.71411.16310.6651.5221.6168888
99GFDL Longwave Scheme [58]−0.217−0.7670.1020.3242.5861.7873.3894.0890.9670.9770.8150.7081.8781.3571.5962.0642377
Table 6. Statistical measures and TOPSIS ranking for the Shortwave Radiation simulation group.
Option | Shortwave Scheme | Mean Bias | Root Square Error | Index of Agreement | Mean Absolute Error | TOPSIS Ranking (each measure reported for Temp and Prec, for JAN and JUL)
1Dudhia Shortwave Scheme [59]−0.247−0.3790.0540.3232.5821.6793.3924.0340.9670.9800.8140.7121.8801.2491.5822.0517633
2GFSC Goddard Shortwave Scheme [35]0.2180.4930.0870.8422.5411.8733.4024.8410.9690.9730.8150.6691.7911.4781.5932.4356788
3CAM Shortwave Scheme [56]−0.012−0.2150.0560.4272.5361.7113.3894.2060.9680.9780.8150.7031.8011.3001.5822.1391445
4RRTMG Shortwave Scheme [33]−0.192−0.1160.0790.5742.5181.7603.3854.5040.9690.9770.8160.6891.8101.3581.5822.2635167
5New Goddard Shortwave Scheme [35]0.0710.1800.0610.5412.5261.7093.3904.3200.9690.9790.8150.6951.7841.3051.5832.2042356
7FLG Fu-Liou-Gu Shortwave Scheme [57]−1.474−7.3620.053−0.4733.5927.8843.4063.5400.9300.6850.8130.7262.7547.3691.5891.7008824
24RRTMG Fast Version−0.133−0.1730.0790.2162.5061.7273.3834.0810.9700.9780.8160.7101.7951.3151.5822.0364272
99GFDL Shortwave Scheme [58]0.123−0.3560.0430.1532.5731.7253.3903.5830.9670.9780.8140.7461.8311.3111.5791.8713511
Table 7. Statistical measures and TOPSIS ranking for the Land Surface simulation group.
Option | Land Surface Scheme | Mean Bias | Root Square Error | Index of Agreement | Mean Absolute Error | TOPSIS Ranking (each measure reported for Temp and Prec, for JAN and JUL)
15-layer Thermal Diffusion [60]−0.071−0.180−0.061−0.5412.5261.7093.3904.3200.9690.9790.8150.6951.7841.3051.5832.2042331
2Unified Noah Land Surface Model [61]0.5850.33560.11081.24492.53081.45182.86694.93010.97190.98210.85720.66841.78341.12921.44642.73213415
3RUC Land Surface Model [62]1.30320.16760.17230.91272.80251.41322.88474.59810.96690.98360.85790.68742.04711.10561.46542.49944143
4Noah-MP Land Surface Model [63]1.29210.17950.11750.8292.93211.46242.89764.48260.96470.98160.8570.69582.10291.14591.46182.43045222
7Pleim-Xiu Land Surface Model [64]0.12060.68080.26151.09012.29811.9342.98174.75040.97290.96480.8520.67651.61981.43841.51622.63881554
The final simulation group involved the Land Surface parameterization schemes shown in Table 7. A land-surface model predicts soil temperature and soil moisture in layers (4 for Noah and Noah-MP, 6 for RUC, 2 for Pleim-Xiu) and the snow water equivalent on the ground. Some models (Noah, Noah-MP) also predict canopy moisture. The results show that land surface processes strongly affect temperature simulations, consistent with previous studies [5], while precipitation remains relatively unaffected. Scheme performance varied with season. For winter temperature the Pleim-Xiu Land Surface Model [64] had the best statistical results, but it performed poorly for summer mean temperature, where the RUC Land Surface Model [62] performed best. The Pleim-Xiu Land Surface Model is a two-layer scheme with vegetation and sub-grid tiling, while the RUC Land Surface Model predicts soil temperature and moisture in six layers using multi-layer snow and frozen soil physics. Regarding precipitation, the Unified Noah Land Surface Model [61] gave the best results for January but the worst for July, where the default 5-layer Thermal Diffusion scheme [60] performed best.
Spatial mean bias plots using the best option of each scheme group examined above are presented for temperature (Figure 3) and precipitation (Figure 4), along with the initial plots using the model's default options. These plots allow the spatial improvement from each selected option to be assessed.
The approach followed here greatly increases the model's prediction ability for temperature (Figure 3). The initial January simulations show significant deviations from the observed values, with underestimations of up to three degrees Celsius in central-east Europe, northern and central Italy, Greece and the Iberian Peninsula. Overestimations are located mostly in Scandinavia, reaching five degrees Celsius. Underestimations also appear over almost all of continental Europe in the initial July simulation, reaching 4–5 °C in the Iberian Peninsula, France and Italy. In the final simulations, almost all of the model's severe failures have disappeared: the grid deviations converge and the fields are generally smoother. The remaining regions of January underestimation are central and northern Italy and the far eastern end of Europe, while overestimation is again found in Scandinavia. The July prediction remains poor, with relatively significant underestimation, only in a very small region of central Italy and northern Spain, while local overestimation is found in southern Hungary and the Balkans.
There is no particular trend in the model's deviations for precipitation during January. However, significant local underestimation is noticed in central UK, central Italy and Greece (Figure 4), and overestimation in central and northern UK, northern Italy, eastern Scandinavia and parts of the Balkans. During July, local underestimation is noticed in central and Eastern Europe, while local overestimation is found in Italy, western Greece and eastern Spain. Although the strategy we pursued centred on improving the temperature forecast, the forecast of mean precipitation has also improved to a certain extent.

4. Conclusions

The PBL Bougeault-Lacarrere Scheme [29], in combination with the MM5 [30] Surface Layer Scheme, had the best performance in predicting January and July temperature and a moderate rank for precipitation. The Yonsei University Scheme [36] is the second best choice for temperature prediction, and for winter precipitation as well. Had our strategy centred on precipitation prediction, the MRF/MM5 [48]/[30] combination (option 99) would have been our choice.
The default Kain-Fritsch Scheme [31] gave the best results as the Cumulus parameterization scheme, with a ranking similar to that of the OSAS Old Simplified Arakawa-Schubert [32]; the former was our scheme of choice as it performed better for January precipitation.
The RRTMG Fast version Longwave Scheme [33] scored highest for temperature prediction and moderately for precipitation. The non-fast version of the RRTMG scheme would have been our choice had our steps been precipitation driven. For the shortwave radiation scheme we chose the New Goddard [35], which performed similarly to the CAM Scheme [56]; the New Goddard scheme's far better spatial improvement for July temperature established it as our choice. The GFDL Shortwave Scheme [58] had the highest rank in predicting precipitation for both January and July.
Our final simulation group assessed the effect of the Land Surface model. The Pleim-Xiu Land Surface Model [64] performed best in predicting January temperature but poorly for July, where the RUC Land Surface Model [62] produced the best results. As far as precipitation is concerned, the Unified Noah Land Surface Model [61] and the 5-layer Thermal Diffusion scheme [60] performed best for January and July, respectively. To set up the model for a multi-seasonal downscaling study, one should choose the best performing Land Surface model for each season.

Acknowledgments

This work was supported by the EU LIFE CLIMATREE project “A novel approach for accounting & monitoring carbon sequestration of tree crops and their potential as carbon sink areas” (LIFE14 CCM/GR/000635).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Skamarock, W.C.; Klemp, J.B.; Dudhia, J.; Gill, D.O.; Barker, D.M.; Duda, M.G.; Huang, X.-Y.; Wang, W.; Powers, J.G. A Description of the Advanced Research WRF, Version 3; National Center for Atmospheric Research, Mesoscale and Microscale Meteorology Division: Boulder, CO, USA, 2008; pp. 1–113.
  2. Mooney, P.A.; Mulligan, F.J.; Fealy, R. Evaluation of the Sensitivity of the Weather Research and Forecasting Model to Parameterization Schemes for Regional Climates of Europe over the Period 1990–95. J. Clim. 2013, 26, 1002–1017. [Google Scholar] [CrossRef]
  3. Borge, R.; Alexandrov, V.; José del Vas, J.; Lumbreras, J.; Rodríguez, E. A comprehensive sensitivity analysis of the WRF model for air quality applications over the Iberian Peninsula. Atmos. Environ. 2008, 42, 8560–8574. [Google Scholar] [CrossRef]
  4. Bukovsky, M.S.; Karoly, D.J. Precipitation Simulations Using WRF as a Nested Regional Climate Model. J. Appl. Meteorol. Climatol. 2009, 48, 2152–2159. [Google Scholar] [CrossRef]
  5. Jin, J.; Miller, N.L.; Schlegel, N. Sensitivity Study of Four Land Surface Schemes in the WRF Model. Adv. Meteorol. 2010, 2010, 11. [Google Scholar] [CrossRef]
  6. Flaounas, E.; Bastin, S.; Janicot, S. Regional climate modelling of the 2006 West African monsoon: Sensitivity to convection and planetary boundary layer parameterisation using WRF. Clim. Dyn. 2011, 36, 1083–1105. [Google Scholar] [CrossRef]
  7. Haylock, M.R.; Hofstra, N.; Klein Tank, A.M.G.; Klok, E.J.; Jones, P.D.; New, M. A European daily high-resolution gridded data set of surface temperature and precipitation for 1950–2006. J. Geophys. Res. Atmos. 2008, 113. [Google Scholar] [CrossRef]
  8. Van den Besselaar, E.J.M.; Haylock, M.R.; van der Schrier, G.; Klein Tank, A.M.G. A European daily high-resolution observational gridded data set of sea level pressure. J. Geophys. Res. Atmos. 2011, 116. [Google Scholar] [CrossRef]
  9. Dudhia, J. “Overview of WRF Physics”. NCAR. Available online: http://www2.mmm.ucar.edu/wrf/users/tutorial/201601/physics.pdf (accessed on 12 June 2017).
  10. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications: A State-of-the-Art Survey; Springer: New York, NY, USA, 1981. [Google Scholar]
  11. Yoon, K. A Reconciliation Among Discrete Compromise Solutions. J. Oper. Res. Soc. 1987, 38, 277–286. [Google Scholar] [CrossRef]
  12. Hwang, C.-L.; Lai, Y.-J.; Liu, T.-Y. A new approach for multiple objective decision making. Comput. Oper. Res. 1993, 20, 889–899. [Google Scholar] [CrossRef]
  13. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  14. Mansell, E.R.; Ziegler, C.L.; Bruning, E.C. Simulated Electrification of a Small Thunderstorm with Two-Moment Bulk Microphysics. J. Atmos. Sci. 2010, 67, 171–194. [Google Scholar] [CrossRef]
  15. Kessler, E. On the continuity and distribution of water substance in atmospheric circulations. Atmos. Res. 1995, 38, 109–145. [Google Scholar] [CrossRef]
  16. Lin, Y.-L.; Farley, R.D.; Orville, H.D. Bulk Parameterization of the Snow Field in a Cloud Model. J. Clim. Appl. Meteorol. 1983, 22, 1065–1092. [Google Scholar] [CrossRef]
  17. Hong, S.-Y.; Dudhia, J.; Chen, S.-H. A Revised Approach to Ice Microphysical Processes for the Bulk Parameterization of Clouds and Precipitation. Mon. Weather Rev. 2004, 132, 103–120. [Google Scholar] [CrossRef]
  18. Hong, S.-Y.; Lim, J.-O. The {WRF} Single-Moment 6-Class Microphysics Scheme {(WSM6)}. J. Korean Meteor. Soc. 2006, 42, 129–151. [Google Scholar]
  19. Tao, W.-K.; Simpson, J.; McCumber, M. An Ice-Water Saturation Adjustment. Mon. Weather Rev. 1989, 117, 231–235. [Google Scholar] [CrossRef]
  20. Thompson, G.; Field, P.R.; Rasmussen, R.M.; Hall, W.D. Explicit Forecasts of Winter Precipitation Using an Improved Bulk Microphysics Scheme. Part II: Implementation of a New Snow Parameterization. Mon. Weather Rev. 2008, 136, 5095–5115. [Google Scholar] [CrossRef]
  21. Milbrandt, J.A.; Yau, M.K. A Multimoment Bulk Microphysics Parameterization. Part I: Analysis of the Role of the Spectral Shape Parameter. J. Atmos. Sci. 2005, 62, 3051–3064. [Google Scholar] [CrossRef]
  22. Milbrandt, J.A.; Yau, M.K. A Multimoment Bulk Microphysics Parameterization. Part II: A Proposed Three-Moment Closure and Scheme Description. J. Atmos. Sci. 2005, 62, 3065–3081. [Google Scholar] [CrossRef]
  23. Morrison, H.; Thompson, G.; Tatarskii, V. Impact of Cloud Microphysics on the Development of Trailing Stratiform Precipitation in a Simulated Squall Line: Comparison of One- and Two-Moment Schemes. Mon. Weather Rev. 2009, 137, 991–1007. [Google Scholar] [CrossRef]
24. Eaton, B. User’s Guide to the Community Atmosphere Model CAM-5.1; NCAR. Available online: http://www.cesm.ucar.edu/models/cesm1.0/cam (accessed on 22 May 2017).
  25. Lin, Y.; Colle, B.A. A New Bulk Microphysical Scheme That Includes Riming Intensity and Temperature-Dependent Ice Characteristics. Mon. Weather Rev. 2011, 139, 1013–1035. [Google Scholar] [CrossRef]
  26. Lim, K.-S.S.; Hong, S.-Y. Development of an Effective Double-Moment Cloud Microphysics Scheme with Prognostic Cloud Condensation Nuclei (CCN) for Weather and Climate Models. Mon. Weather Rev. 2010, 138, 1587–1612. [Google Scholar] [CrossRef]
  27. Gilmore, M.S.; Straka, J.M.; Rasmussen, E.N. Precipitation Uncertainty Due to Variations in Precipitation Particle Parameters within a Simple Microphysics Scheme. Mon. Weather Rev. 2004, 132, 2610–2627. [Google Scholar] [CrossRef]
  28. Thompson, G.; Eidhammer, T. A Study of Aerosol Impacts on Clouds and Precipitation Development in a Large Winter Cyclone. J. Atmos. Sci. 2014, 71, 3636–3658. [Google Scholar] [CrossRef]
29. Bougeault, P.; Lacarrere, P. Parameterization of Orography-Induced Turbulence in a Mesobeta-Scale Model. Mon. Weather Rev. 1989, 117, 1872–1890. [Google Scholar] [CrossRef]
  30. Beljaars, A.C.M. The parametrization of surface fluxes in large-scale models under free convection. Q. J. R. Meteorol. Soc. 1995, 121, 255–270. [Google Scholar] [CrossRef]
  31. Kain, J.S. The Kain–Fritsch Convective Parameterization: An Update. J. Appl. Meteorol. 2004, 43, 170–181. [Google Scholar] [CrossRef]
32. Pan, H.L. Implementing a Mass Flux Convective Parameterization Package for the NMC Medium Range Forecast Model; NMC Office Note: London, UK, 1995; Volume 40. [Google Scholar]
  33. Iacono, M.J.; Delamere, J.S.; Mlawer, E.J.; Shephard, M.W.; Clough, S.A.; Collins, W.D. Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res. Atmos. 2008, 113, D13103. [Google Scholar] [CrossRef]
  34. Räisänen, P.; Barker, H.W.; Cole, J.N.S. The Monte Carlo Independent Column Approximation’s Conditional Random Noise: Impact on Simulated Climate. J. Clim. 2005, 18, 4715–4730. [Google Scholar] [CrossRef]
  35. Chou, M.D.; Suarez, M.J. A Solar Radiation Parameterization for Atmospheric Studies; NASA Tech. Memo 104606; NASA: Boulder, CO, USA, 1999; Volume 40.
  36. Hong, S.-Y.; Noh, Y.; Dudhia, J. A New Vertical Diffusion Package with an Explicit Treatment of Entrainment Processes. Mon. Weather Rev. 2006, 134, 2318–2341. [Google Scholar] [CrossRef]
  37. Janjić, Z.I. The Step-Mountain Eta Coordinate Model: Further Developments of the Convection, Viscous Sublayer, and Turbulence Closure Schemes. Mon. Weather Rev. 1994, 122, 927–945. [Google Scholar] [CrossRef]
  38. Janjic, Z.I. The surface layer in the NCEP Eta Model. In Proceedings of the Eleventh Conference on Numerical Weather Prediction, Norfolk, VA, USA, 19–23 August 1996; pp. 354–355. [Google Scholar]
  39. Sukoriansky, S.; Galperin, B.; Perov, V. Application of a new spectral model of stratified turbulence to the atmospheric boundary layer over sea ice. Bound. Layer Meteorol. 2005, 117, 231–257. [Google Scholar] [CrossRef]
  40. Nakanishi, M.; Niino, H. An improved Mellor–Yamada level 3 model: Its numerical stability and application to a regional prediction of advecting fog. Bound. Layer Meteorol. 2006, 119, 397–407. [Google Scholar] [CrossRef]
  41. Nakanishi, M.; Niino, H. Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteorol. Soc. Jpn. 2009, 87, 895–912. [Google Scholar] [CrossRef]
  42. Pleim, J.E. A Combined Local and Nonlocal Closure Model for the Atmospheric Boundary Layer. Part I: Model Description and Testing. J. Appl. Meteorol. Climatol. 2007, 46, 1383–1395. [Google Scholar] [CrossRef]
  43. Pleim, J.E. A simple, efficient solution of flux-profile relationships in the atmospheric surface layer. J. Appl. Meteorol. Clim. 2006, 45, 341–347. [Google Scholar] [CrossRef]
  44. Bretherton, C.S.; Park, S. A New Moist Turbulence Parameterization in the Community Atmosphere Model. J. Clim. 2009, 22, 3422–3448. [Google Scholar] [CrossRef]
  45. Angevine, W.M.; Jiang, H.; Mauritsen, T. Performance of an Eddy Diffusivity–Mass Flux Scheme for Shallow Cumulus Boundary Layers. Mon. Weather Rev. 2010, 138, 2895–2912. [Google Scholar] [CrossRef]
  46. Shin, H.H.; Hong, S.-Y. Representation of the Subgrid-Scale Turbulent Transport in Convective Boundary Layers at Gray-Zone Resolutions. Mon. Weather Rev. 2015, 143, 250–271. [Google Scholar] [CrossRef]
  47. Grenier, H.; Bretherton, C.S. A Moist PBL Parameterization for Large-Scale Models and Its Application to Subtropical Cloud-Topped Marine Boundary Layers. Mon. Weather Rev. 2001, 129, 357–377. [Google Scholar] [CrossRef]
  48. Hong, S.-Y.; Pan, H.-L. Nonlocal Boundary Layer Vertical Diffusion in a Medium-Range Forecast Model. Mon. Weather Rev. 1996, 124, 2322–2339. [Google Scholar] [CrossRef]
  49. Grell, G.A.; Freitas, S.R. A scale and aerosol aware stochastic convective parameterization for weather and air quality modeling. Atmos. Chem. Phys. 2014, 14, 5233–5250. [Google Scholar] [CrossRef]
  50. Grell, G.A.; Dévényi, D. A generalized approach to parameterizing convection combining ensemble and data assimilation techniques. Geophys. Res. Lett. 2002, 29, 38-1–38-4. [Google Scholar] [CrossRef]
  51. Tiedtke, M. A Comprehensive Mass Flux Scheme for Cumulus Parameterization in Large-Scale Models. Mon. Weather Rev. 1989, 117, 1779–1800. [Google Scholar] [CrossRef]
  52. Han, J.; Pan, H.-L. Revision of Convection and Vertical Diffusion Schemes in the NCEP Global Forecast System. Weather Forecast. 2011, 26, 520–533. [Google Scholar] [CrossRef]
  53. Zhang, C.; Wang, Y.; Hamilton, K. Improved Representation of Boundary Layer Clouds over the Southeast Pacific in ARW-WRF Using a Modified Tiedtke Cumulus Parameterization Scheme. Mon. Weather Rev. 2011, 139, 3489–3513. [Google Scholar] [CrossRef]
  54. Kain, J.S.; Fritsch, J.M. A One-Dimensional Entraining/Detraining Plume Model and Its Application in Convective Parameterization. J. Atmos. Sci. 1990, 47, 2784–2802. [Google Scholar] [CrossRef]
  55. Mlawer, E.J.; Taubman, S.J.; Brown, P.D.; Iacono, M.J.; Clough, S.A. Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res. Atmos. 1997, 102, 16663–16682. [Google Scholar] [CrossRef]
  56. Collins, W.D.; Rasch, P.J.; Boville, B.A.; Hack, J.J.; McCaa, J.R.; Williamson, D.L.; Kiehl, J.T.; Briegleb, B.; Bitz, C.; Lin, S.-J.; et al. Description of the NCAR Community Atmosphere Model (CAM 3.0); NCAR Technical Note: Boulder, CO, USA, 2004. [Google Scholar]
  57. Fu, Q.; Liou, K.N. On the Correlated k-Distribution Method for Radiative Transfer in Nonhomogeneous Atmospheres. J. Atmos. Sci. 1992, 49, 2139–2156. [Google Scholar] [CrossRef]
  58. Fels, S.B.; Schwarzkopf, M.D. An efficient, accurate algorithm for calculating CO2 15 μm band cooling rates. J. Geophys. Res. Oceans 1981, 86, 1205–1232. [Google Scholar] [CrossRef]
  59. Dudhia, J. Numerical Study of Convection Observed during the Winter Monsoon Experiment Using a Mesoscale Two-Dimensional Model. J. Atmos. Sci. 1989, 46, 3077–3107. [Google Scholar] [CrossRef]
  60. Dudhia, J. A multi-layer soil temperature model for MM5. In Proceedings of the Preprints, the Sixth PSU/NCAR Mesoscale Model Users’ Workshop, Boulder, CO, USA, 22–24 July 1996. [Google Scholar]
  61. Tewari, M.; Chen, F.; Wang, W.; Dudhia, J.; LeMone, M.A.; Mitchell, K.; Ek, M.; Gayno, G.; Wegiel, J.; Cuenca, R.H. Implementation and verification of the unified NOAH land surface model in the WRF model. In Proceedings of the 20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction, Boulder, CO, USA, 10–15 January 2004; pp. 11–15. [Google Scholar]
  62. Benjamin, S.G.; Grell, G.A.; Brown, J.M.; Smirnova, T.G.; Bleck, R. Mesoscale weather prediction with the RUC hybrid isentropic-terrain-following coordinate model. Mon. Weather Rev. 2004, 132, 473–494. [Google Scholar] [CrossRef]
  63. Niu, G.-Y.; Yang, Z.-L.; Mitchell, K.E.; Chen, F.; Ek, M.B.; Barlage, M.; Kumar, A.; Manning, K.; Niyogi, D.; Rosero, E.; et al. The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res. Atmos. 2011, 116, D12109. [Google Scholar] [CrossRef]
  64. Pleim, J.E.; Xiu, A. Development and Testing of a Surface Flux and Planetary Boundary Layer Model for Application in Mesoscale Models. J. Appl. Meteorol. 1995, 34, 16–32. [Google Scholar] [CrossRef]
Figure 1. WRF multinesting domain configuration approach.
Figure 2. Interactions between WRF parameterization schemes.
Figure 3. Temperature Mean Bias spatial distribution after each simulation step.
Figure 4. Precipitation Mean Bias spatial distribution after each simulation step.
Table 1. Statistical measures representing each simulation.
| Measure | Formula |
| --- | --- |
| Mean bias | $\frac{1}{n}\sum_{i=1}^{n}\left(X_{predicted}-X_{observed}\right)$ |
| Root square error | $\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_{predicted}-X_{observed}\right)^{2}}$ |
| Index of agreement | $1-\frac{\sum_{i=1}^{n}\left(X_{predicted}-X_{observed}\right)^{2}}{\sum_{i=1}^{n}\left(\lvert X_{predicted}-\bar{X}_{observed}\rvert+\lvert X_{observed}-\bar{X}_{observed}\rvert\right)^{2}}$ |
| Mean absolute error | $\frac{1}{n}\sum_{i=1}^{n}\lvert X_{predicted}-X_{observed}\rvert$ |
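The four measures of Table 1 can be computed directly from paired predicted/observed series; a minimal plain-Python sketch (the function name and signature are illustrative, not from the paper):

```python
import math

def evaluation_measures(predicted, observed):
    """Mean bias, root square error, index of agreement and mean absolute
    error for paired predicted/observed values (formulas of Table 1)."""
    n = len(predicted)
    diffs = [p - o for p, o in zip(predicted, observed)]
    mean_bias = sum(diffs) / n
    rse = math.sqrt(sum(d * d for d in diffs) / n)
    mae = sum(abs(d) for d in diffs) / n
    # index of agreement: 1 - squared error / potential error,
    # with deviations taken about the observed mean
    o_mean = sum(observed) / n
    potential = sum((abs(p - o_mean) + abs(o - o_mean)) ** 2
                    for p, o in zip(predicted, observed))
    ioa = 1.0 - sum(d * d for d in diffs) / potential
    return mean_bias, rse, ioa, mae
```

A perfect simulation gives a mean bias, root square error and mean absolute error of zero and an index of agreement of one; the index drops toward zero as the squared error approaches the potential error.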
Table 2. Statistical measures and TOPSIS ranking for the Microphysics simulation group.
MB = Mean Bias; RSE = Root Square Error; IoA = Index of Agreement; MAE = Mean Absolute Error; Rank = TOPSIS ranking; T = temperature, P = precipitation; Jan/Jul = January/July 2015.

| Option | Microphysics Scheme | MB T Jan | MB T Jul | MB P Jan | MB P Jul | RSE T Jan | RSE T Jul | RSE P Jan | RSE P Jul | IoA T Jan | IoA T Jul | IoA P Jan | IoA P Jul | MAE T Jan | MAE T Jul | MAE P Jan | MAE P Jul | Rank T Jan | Rank T Jul | Rank P Jan | Rank P Jul |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Kessler Scheme [15] | −0.41 | −1.40 | −0.51 | −0.37 | 2.69 | 2.19 | 3.54 | 3.86 | 0.97 | 0.97 | 0.79 | 0.71 | 1.98 | 1.77 | 1.56 | 1.81 | 16 | 17 | 17 | 17 |
| 2 | Lin et al. Scheme [16] | −0.40 | −1.03 | 0.12 | −0.02 | 2.61 | 1.93 | 3.72 | 3.95 | 0.97 | 0.97 | 0.80 | 0.73 | 1.91 | 1.51 | 1.68 | 1.87 | 15 | 8 | 9 | 2 |
| 3 | WSM3 Single-moment 3-class Scheme [17] | −0.87 | −1.20 | 0.09 | −0.09 | 2.74 | 2.01 | 3.65 | 3.78 | 0.96 | 0.97 | 0.80 | 0.74 | 2.11 | 1.60 | 1.66 | 1.82 | 17 | 15 | 8 | 4 |
| 4 | WSM5 Single-moment 5-class Scheme [17] | −0.25 | −1.20 | 0.13 | −0.10 | 2.61 | 2.01 | 3.62 | 3.82 | 0.97 | 0.97 | 0.81 | 0.74 | 1.89 | 1.60 | 1.67 | 1.82 | 12 | 16 | 10 | 6 |
| 6 | WSM6 Single-moment 6-class Scheme [18] | −0.25 | −1.07 | 0.13 | −0.09 | 2.61 | 1.95 | 3.64 | 3.83 | 0.97 | 0.97 | 0.81 | 0.73 | 1.89 | 1.53 | 1.68 | 1.82 | 7 | 10 | 11 | 5 |
| 7 | Goddard Scheme [19] | −0.21 | −1.12 | 0.07 | −0.11 | 2.61 | 1.99 | 3.48 | 3.82 | 0.97 | 0.97 | 0.81 | 0.73 | 1.88 | 1.57 | 1.61 | 1.81 | 5 | 13 | 7 | 7 |
| 8 | Thompson Scheme [20] | −0.24 | −0.89 | −0.04 | 0.16 | 2.61 | 1.87 | 3.44 | 3.95 | 0.97 | 0.98 | 0.81 | 0.72 | 1.89 | 1.43 | 1.58 | 1.94 | 6 | 6 | 6 | 14 |
| 9 | Milbrandt–Yau Double Moment Scheme [21,22] | −0.25 | −0.93 | 0.19 | 0.17 | 2.61 | 1.89 | 3.51 | 4.00 | 0.97 | 0.97 | 0.82 | 0.72 | 1.90 | 1.46 | 1.66 | 1.95 | 18 | 7 | 14 | 16 |
| 10 | Morrison 2-moment Scheme [23] | −0.25 | −1.06 | −0.02 | −0.13 | 2.62 | 1.94 | 3.46 | 3.79 | 0.97 | 0.97 | 0.81 | 0.73 | 1.90 | 1.52 | 1.59 | 1.80 | 10 | 9 | 4 | 13 |
| 11 | CAM V5.1 2-moment 5-class Scheme [24] | −0.02 | −1.42 | −0.26 | −0.54 | 2.56 | 2.18 | 3.62 | 3.49 | 0.97 | 0.97 | 0.78 | 0.73 | 1.81 | 1.78 | 1.58 | 1.65 | 1 | 18 | 18 | 18 |
| 13 | SBU Stony Brook University Scheme [25] | −0.16 | −0.87 | 0.03 | 0.02 | 2.59 | 1.86 | 3.40 | 3.85 | 0.97 | 0.97 | 0.81 | 0.73 | 1.86 | 1.43 | 1.59 | 1.86 | 2 | 5 | 5 | 1 |
| 14 | WDM5 Double Moment 5-class Scheme [26] | −0.25 | −1.10 | 0.15 | −0.11 | 2.60 | 1.97 | 3.66 | 3.82 | 0.97 | 0.97 | 0.80 | 0.73 | 1.88 | 1.54 | 1.70 | 1.83 | 9 | 11 | 12 | 8 |
| 16 | WDM6 Double Moment 6-class Scheme [26] | −0.25 | −1.10 | 0.16 | −0.13 | 2.60 | 1.97 | 3.70 | 3.82 | 0.97 | 0.97 | 0.80 | 0.73 | 1.88 | 1.55 | 1.71 | 1.82 | 8 | 12 | 13 | 12 |
| 17 | NSSL 2-moment Scheme [14] | −0.17 | −0.83 | −0.03 | 0.12 | 2.59 | 1.83 | 3.36 | 3.83 | 0.97 | 0.98 | 0.81 | 0.73 | 1.86 | 1.40 | 1.56 | 1.91 | 3 | 1 | 2 | 10 |
| 18 | NSSL 2-moment Scheme with CCN Prediction [14] | −0.17 | −0.83 | −0.03 | 0.11 | 2.59 | 1.83 | 3.36 | 3.83 | 0.97 | 0.98 | 0.81 | 0.73 | 1.86 | 1.40 | 1.56 | 1.91 | 4 | 3 | 1 | 9 |
| 19 | NSSL 1-moment 7-class Scheme | −0.31 | −0.83 | −0.25 | 0.12 | 2.62 | 1.83 | 3.44 | 3.83 | 0.97 | 0.98 | 0.81 | 0.73 | 1.91 | 1.40 | 1.57 | 1.91 | 13 | 2 | 15 | 11 |
| 21 | NSSL 1-moment 6-class Scheme [27] | −0.31 | −1.13 | −0.25 | −0.06 | 2.62 | 1.99 | 3.47 | 3.94 | 0.97 | 0.97 | 0.81 | 0.71 | 1.91 | 1.57 | 1.58 | 1.86 | 14 | 14 | 16 | 3 |
| 28 | Aerosol-aware Thompson Scheme [28] | −0.25 | −0.84 | −0.02 | 0.17 | 2.60 | 1.84 | 3.46 | 3.95 | 0.97 | 0.98 | 0.81 | 0.72 | 1.89 | 1.41 | 1.58 | 1.95 | 11 | 4 | 3 | 15 |
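The TOPSIS rankings in the last columns of Table 2 combine the four statistical measures per variable and month, with criteria weights derived from Shannon entropy [13]. The following is a minimal sketch of entropy-weighted TOPSIS [10,11,12] in plain Python, not the authors' implementation; it assumes all matrix entries are positive (e.g., absolute biases), and `benefit[j]` flags criteria where larger is better (such as index of agreement) versus error measures where smaller is better:

```python
import math

def entropy_topsis(matrix, benefit):
    """Rank alternatives (rows) over criteria (columns) using TOPSIS with
    Shannon-entropy weights. Returns relative-closeness scores in [0, 1];
    higher means closer to the ideal solution (rank 1)."""
    m, n = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    # entropy weights: criteria with more dispersion carry more weight
    weights = []
    for col in cols:
        total = sum(col)
        probs = [x / total for x in col]
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        weights.append(1.0 - entropy)
    w_sum = sum(weights)
    weights = [w / w_sum for w in weights]
    # vector-normalized, weighted decision matrix
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # ideal and anti-ideal solutions per criterion
    ideal = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v)
             for j in range(n)]
    anti = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v)
            for j in range(n)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((row[j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((row[j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Sorting the alternatives by descending score reproduces a ranking of the kind shown in Table 2: the scheme closest to the ideal (lowest errors, highest agreement) receives rank 1.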