#### *3.1. The Wind Distribution Results*

Figure 6 shows the Weibull distributions of the seven towers at 10, 50, and 70 m. Tower 5 was missing most of its data at 50 m and all of its data at 70 m; therefore, the distributions of Tower 5 at 50 and 70 m were not analyzed. Table 8 lists the shape and scale parameters of each Weibull distribution in Figure 6. In all of the subfigures of Figure 6, the wind speed distribution of the red lines is generally shifted toward lower speeds than that of the green lines, which means that the wind speed of Test 2 is smaller than that of Test 1. Comparing the peak positions of the wind speed distributions, except in several cases (Tower 1 at 10 m; Tower 3 at 10 m; Tower 5 at 10 m; and Tower 6 at 10, 30, and 70 m), the peak position of Test 2 is closer to the observation than that of Test 1 in most cases. Furthermore, for the peak values of the wind speed distributions, except in four cases (Tower 1 at 70 m, Tower 2 at 70 m, Tower 3 at 70 m, and Tower 6 at 70 m), the peak values of Test 2 are closer to the observations than those of Test 1. Table 8 shows that the shape and scale parameters of Test 1 are larger than those of Test 2 and Test 3; Test 1's simulation performance is worse than that of Test 2 and Test 3, mainly due to the systematically higher simulated wind speed.
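As a rough illustration of how shape and scale parameters like those in Table 8 can be obtained, the sketch below fits a two-parameter Weibull distribution to a synthetic hourly wind speed sample with SciPy; the sample and its parameter values are hypothetical stand-ins for the tower observations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical hourly wind speeds (m/s) for one tower at one height;
# real tower observations would replace this synthetic sample.
speeds = rng.weibull(2.0, size=8760) * 7.0  # true shape k = 2, scale c = 7 m/s

# Fix the location at zero so the two-parameter Weibull is fitted,
# i.e. the form whose shape and scale parameters are reported.
shape_k, _, scale_c = stats.weibull_min.fit(speeds, floc=0)
print(f"shape k = {shape_k:.2f}, scale c = {scale_c:.2f} m/s")
```

With a year of hourly data (8760 samples), the fitted parameters land very close to the generating values.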


**Table 8.** Shape and scale parameters of the Weibull distributions.

Remote Sens. 2020, 12, 973 13 of 30

**Figure 6.** The Weibull distribution of seven towers at different heights. The blue lines are the observed wind speeds, the green lines are the results of Test 1 (no data assimilation), and the red lines are the results of Test 2 (data assimilation).

#### *3.2. The RMSE, IA, and R Results*

Tables A1–A3 in Appendix A present the RMSE, IA, and R results of the three tests (Test 1, Test 2, and Test 3) at heights of 10, 50, and 70 m. In order to analyze the distributions in different months, the results of each wind tower were calculated separately for each month, from January to December. The vacant positions in the tables mean that there were no valid wind speed observations in that month, so the indices were not calculated.
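The three indices can be computed as follows; this is a minimal sketch, assuming IA refers to Willmott's standard index of agreement (the paper's exact formulation is not given in this section).

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error between simulated and observed series."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def index_of_agreement(sim, obs):
    """Willmott's index of agreement (IA), bounded in [0, 1]."""
    obs_mean = obs.mean()
    num = np.sum((sim - obs) ** 2)
    den = np.sum((np.abs(sim - obs_mean) + np.abs(obs - obs_mean)) ** 2)
    return float(1.0 - num / den)

def pearson_r(sim, obs):
    """Pearson correlation coefficient (R)."""
    return float(np.corrcoef(sim, obs)[0, 1])
```

A perfect simulation gives RMSE = 0 and IA = R = 1; note that R is insensitive to a constant bias while RMSE is not, which is why the three indices are reported together.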


For the distributions of wind direction, we plotted the wind rose for each month, using the wind speed and wind direction of Test 2. Figures A1–A4 in Appendix B are the wind roses at different heights in the four seasons of spring, summer, autumn, and winter.

In the results of the correlation coefficients (R) in Tables A1 and A2, Test 1 had two records that did not pass the significance test (10 m Tower 6 Jul. and 50 m Tower 5 Jan.). This is because (1) the amount of data was too small (790 and 63 records, respectively) and (2) the correlation coefficients were too small. All of the other correlation coefficients passed the significance test with more than 99% confidence.

After calculating the average value of the results for the different towers, we obtained the average distribution of RMSE, IA, and R. Figure 7 shows the seven towers' average results of RMSE, IA, and R of Tests 1, 2, and 3 at 10, 50, and 70 m. As can be seen from Figure 7, the results of Test 2 and Test 3 are better than those of Test 1 on all three indices: both conventional and satellite data can improve the wind speed simulation. However, compared with Test 2, Test 3 shows smaller improvements in RMSE, IA, and R. Compared with the conventional data, the satellite data have a wider geographical coverage, and their improvement of the wind speed simulation is more significant.
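The 99%-confidence screening described above can be reproduced with a two-sided significance test on the Pearson correlation; the sketch below uses synthetic data, not the tower records.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
obs = rng.normal(6.0, 2.0, size=720)        # a month of hourly observations (synthetic)
sim = obs + rng.normal(0.0, 1.5, size=720)  # a correlated simulation (synthetic)

# pearsonr returns the correlation and the two-sided p-value of the
# test against zero correlation; p < 0.01 means >99% confidence.
r, p = stats.pearsonr(sim, obs)
significant = p < 0.01
```

With very few samples (such as the 63-record case above), even a moderate r can fail this test, which matches the explanation given in the text.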

**Figure 7.** Average of the results of root-mean-square error (RMSE), index of agreement (IA), and correlation coefficient (R) from the seven towers.

#### *3.3. Wind Speed Simulation Results Analysis*

From Tables A1–A3, we can find that the RMSE of the model simulation results varied greatly in different months. In some towers, the gap between the different months even reached 2 m/s (10 m for Tower 2; 70 m for Tower 1) and, in most cases, there was at least a 0.5 m/s gap. Like the RMSE, the R also changed greatly with the month. Furthermore, we also found that the RMSE and R change considerably at different heights in some cases (Tower 2, December; Tower 3, December).

Stensrud et al. [33] compared the MM4 [34] model output with the observed temperature and found that there is systematic bias in the NWP model. In order to analyze the systematic bias, we calculated the mean wind speed value of Test 1, Test 2, and the observation data in each month, and we obtained the wind speed anomalies by using the following equation:

$$
v_{ai} = v_i - \overline{v}_i, \quad i = 1,2,3\dots \tag{9}
$$

where $v_{ai}$ is the wind speed anomaly, $v_i$ is the wind speed, and $\overline{v}_i$ is the mean value of wind speed in each month.
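Equation (9) amounts to removing each calendar month's mean from the series. A minimal sketch with pandas, using a synthetic hourly series in place of the real data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2019-01-01", "2019-12-31 23:00", freq="h")
# Hypothetical hourly wind speed series; observed or simulated tower
# speeds would replace this synthetic column.
df = pd.DataFrame({"speed": rng.weibull(2.0, len(idx)) * 7.0}, index=idx)

# Equation (9): subtract each calendar month's mean speed.
monthly_mean = df.groupby(df.index.month)["speed"].transform("mean")
df["anomaly"] = df["speed"] - monthly_mean
```

By construction, the anomaly series has zero mean within every month, so any remaining error in the anomalies is free of the monthly systematic bias.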

In order to analyze the wind speed simulation results in Test 1 and to find out the performance of the WRF model on wind speed simulation, we calculated the RMSE, IA, and R by using the wind speed and wind speed anomalies of Test 1 and the observations. Figure 8 shows the results of the average indices from the seven towers. We also calculated the average value of the bias of the mean speed of Test 1 and Test 2; the results are shown in Figure 9.

**Figure 8.** Average results of root-mean-square error (RMSE), index of agreement (IA), and correlation coefficient (R) in Test 1; "10 m", "50 m", and "70 m" are the RMSE, IA, and R results of wind speed at 10, 50, and 70 m, and the "anomalies" histograms are the results of RMSE, IA, and R calculated using wind speed anomalies.

**Figure 9.** Average of the results of the mean speed bias from the seven towers.

In Figure 8, we can find that, in the same month and at the same height, the IA is similar between wind speed and wind speed anomalies, but sometimes the values of RMSE can be very different. The difference in RMSE between wind speed and wind speed anomalies can be caused by systematic bias in the model's simulations; therefore, when the model has systematic bias, the RMSE gap becomes larger. In some months with large RMSE values, the RMSE gap is always large, indicating that part of the error is caused by systematic bias.
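The gap between the two RMSE curves has a simple interpretation: the squared RMSE of the raw series equals the squared mean bias plus the squared RMSE of the anomalies, so a large gap directly signals systematic bias. A quick numerical check of this identity on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(6.0, 2.0, size=1000)
# A simulation with a deliberate systematic bias plus random error.
sim = obs + 1.2 + rng.normal(0.0, 1.0, size=1000)

bias = float(np.mean(sim - obs))
rmse_full = float(np.sqrt(np.mean((sim - obs) ** 2)))
# Anomalies remove each series' own mean, and with it the mean bias:
diff_anom = (sim - sim.mean()) - (obs - obs.mean())
rmse_anom = float(np.sqrt(np.mean(diff_anom ** 2)))

# Identity: rmse_full**2 == bias**2 + rmse_anom**2
```

The decomposition holds exactly (up to floating point), which is why comparing the two RMSEs separates systematic bias from random error.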

In Figure 8, the RMSE is less than 3 m/s in May, June, July, August, and September, of which the lowest value is in August. Among the IA results, August again reaches the highest value, and the values of April and May are lower than in the rest of the months. The results of R are similar to those of IA, having the highest value in August and the lowest values in April and May. From these distributions of the indices, the simulation results in summer are generally the best, and the simulation results in spring are the worst. From Figure A1 in Appendix B, we can see that the main wind directions in spring are east, south, and southeast, and the wind speed distribution is particularly dense in some directions, namely from ocean to land. We can infer that the poor performance of the spring simulation may be caused by the wind from the ocean. However, in summer (Figure A2), especially in July and August, although there are winds from the ocean direction, the wind is still distributed in many directions.

From the performance of RMSE, MAE, IA, and R in winter, we can see that, although the RMSE and MAE values are large in winter, IA and R are also large. From the RMSEs of wind speed and wind speed anomalies, we can find that the gap between them is larger in winter than in the other months. Figure 9 also shows that the bias of the mean speed is larger in winter. These results indicate that the wind speed simulation has a large systematic bias in winter, but since the IA and R are also large in winter, the wind speed pattern can be simulated well. Additionally, as seen in Figures 8 and 9, in some other months, like March, April, June, July, and November, there also exists considerable systematic bias.

The simulation results at different heights show smaller changes compared to the seasonal changes. In winter, the RMSE at 10 m is larger than the RMSE at 50 and 70 m, while it is smaller in the other seasons. The R and IA results also show that the 10 m simulation performed better in winter. From Appendix B, we can see that the wind speed distribution at different heights is basically the same, and the wind speed at 10 m is only slightly smaller than that at 50 and 70 m.

#### *3.4. Data Assimilation Results Analysis*

From the 10 m results (Appendix A), we can see that almost all of the RMSE values in Test 2 are less than those in Test 1, except in some cases (Tower 3 Mar., Tower 4 Feb., Tower 7 Mar., and Tower 7 Nov.). Compared with Test 1, most of the decreases in the RMSE values of Test 2 vary from 0 to 0.5 m/s, and in some cases, they can reach 1 m/s. For the 50 m and 70 m results, there are still some cases where the RMSE values of Test 2 are larger than those of Test 1 (50 m Tower 3 Jan., 50 m Tower 3 Mar., 50 m Tower 4 Apr., 50 m Tower 7 Jan., 50 m Tower 7 Aug., 70 m Tower 4 Jan., 70 m Tower 7 Jan., and 70 m Tower 7 Feb.), but in most months, the RMSE values of Test 2 are significantly reduced compared to those of Test 1. In some cases (50 m Tower 2 Mar., 70 m Tower 1 Mar., 70 m Tower 1 Apr., 70 m Tower 2 Mar., and 70 m Tower 3 Mar.), Test 2 reduced the RMSE by more than 1 m/s. This significant reduction in RMSE indicates that the bias of the wind speed simulation becomes smaller after data assimilation is used.


The index of agreement results at 10, 50, and 70 m (Tables A1–A3) show that, except in some cases (10 m Tower 7 May, 10 m Tower 7 Jul., 10 m Tower 7 Nov., 50 m Tower 7 Jan., 50 m Tower 7 Feb., and 50 m Tower 7 Mar.), Test 2 has a larger value of IA than Test 1. The increments of IA vary from 0 to 0.2, which is a significant improvement of the wind speed simulation.

In the R results at 10, 50, and 70 m (Tables A1–A3), it can be found that some cases have a large value difference between Test 1 and Test 2. The increase of R indicates that satellite data assimilation significantly improved the correlation between the simulation results and the observations.

Compared with Test 1, we calculated the average reduction in RMSE and the average increases in IA and R of Test 2; the results for wind speed and wind speed anomalies are shown in Figure 10.

**Figure 10.** Difference between the Test 2 and Test 1 results; "10 m", "50 m", and "70 m" are the reduced root-mean-square error (RMSE), increased index of agreement (IA), and increased correlation coefficient (R) results of wind speed at 10, 50, and 70 m. The "anomalies" histograms are the results of the reduced RMSE, increased IA, and increased R calculated using the wind speed anomalies in Test 1 and Test 2.

The RMSE results in Figure 10 show that the reductions in RMSE were larger in March, April, May, July, and October, and smaller in November, December, January, February, and June. The results of IA and R are roughly the same as those of RMSE. Data assimilation can significantly improve the wind simulation results in March–May and July–October.

By comparing the results of wind speed and wind speed anomalies, we can find that there are gaps between the two sets of reduced RMSE values, especially in March, April, and July, when the reduced RMSE values of the anomaly results are smaller than those of the wind speed results. Moreover, Figure 9 shows that Test 2 has less mean speed bias than Test 1. This means that, in these months, some systematic bias was corrected by data assimilation.

From Figures A1–A3 in Appendix B, we can find that, during March–May and July–October, the main wind directions were south, southeast, and east, while the main wind direction during November–February was north, and the north wind during November–February was strong and very stable. Wind from the north direction may be caused by the winter monsoon. The winter monsoon is affected by large-scale circulation, and the effect of data assimilation on it may be limited.
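The wind roses summarize how often the wind falls in each compass sector. A minimal sketch of the underlying direction binning (the 16-sector convention here is an assumption; the appendix figures may use a different sector count):

```python
import numpy as np

def sector_frequencies(direction_deg, n_sectors=16):
    """Relative frequency of wind in each compass sector (N, NNE, ...)."""
    width = 360.0 / n_sectors
    # Shift by half a sector so each bin is centered on its compass point,
    # e.g. directions in [348.75, 11.25) all count as north.
    idx = (np.floor((np.asarray(direction_deg) + width / 2) / width)
           % n_sectors).astype(int)
    counts = np.bincount(idx, minlength=n_sectors)
    return counts / counts.sum()
```

The stable northerly monsoon flow described above would appear as a single dominant sector in the winter roses.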

From Figure 10, we can find that, in March–May and July–October, although the results of Test 1 differ greatly between months, after data assimilation, the differences in Test 2 become smaller. Figure 10 shows that, compared with Test 1, Test 2 improved considerably in March, April, May, and October. Data assimilation can thus correct some of the poorly simulated cases in spring and autumn.

For the performance of data assimilation at different heights, we found that, compared with the lower level (10 m), most cases at the higher levels (50 and 70 m) have larger RMSE reductions and increments of IA and R. This result indicates that, through data assimilation, simulation results at higher levels improved more than those at lower levels. It can be seen from Figure 4 that the wind speed of 10 m is smaller than the wind speed of 50 and 70 m, so the error reduction is small. At the same time, the wind speed of 10 m is affected by the terrain, and the data assimilation has a greater effect on 50 and 70 m.

The incremental field can reflect the dynamic adjustments from data assimilation. To investigate the incremental field between Test 1 and Test 2, the MAD was calculated as follows:

$$
MAD = \frac{1}{n} \sum_{i=1}^{n} \left| S_{\mathrm{Test}\,2,i} - S_{\mathrm{Test}\,1,i} \right| \tag{10}
$$

where $n$ is the total number of grid points in domain 03, $S_{\mathrm{Test}\,2,i}$ is the U or V wind speed of Test 2 at grid point $i$, and $S_{\mathrm{Test}\,1,i}$ is the U or V wind speed of Test 1 at grid point $i$.
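Equation (10) can be computed directly on the model's U or V fields; a minimal sketch (the array shapes are hypothetical):

```python
import numpy as np

def mad(field_test2, field_test1):
    """Equation (10): mean absolute difference over all grid points."""
    return float(np.mean(np.abs(field_test2 - field_test1)))

# For a vertical profile as in Figure 11, apply it level by level, e.g. on
# hypothetical U-wind arrays u2, u1 of shape (n_levels, ny, nx):
#   mad_profile = [mad(u2[k], u1[k]) for k in range(u2.shape[0])]
```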

Figure 11 shows the vertical distribution of the MAD between Test 1 and Test 2 at 00, 06, 12, and 18 UTC. At each time, the MAD increased with height, reached its maximum value at around 200–300 m, and then decreased with height. The assimilation of satellite data thus has effects in the troposphere rather than just improving the near-surface layers. Moreover, the MAD of the V component of wind speed is larger than that of the U component, especially at 12 and 18 UTC. In our study case, the V component of wind speed is in the direction of the sea–land breeze, and this may be because satellite data assimilation improved the temperature and pressure fields and thereby affected the simulation of the sea–land breeze.

**Figure 11.** Vertical distribution of the mean absolute difference (MAD) of U (m/s) and V (m/s) between Test 1 and Test 2 at 00 UTC (**a**), 06 UTC (**b**), 12 UTC (**c**), and 18 UTC (**d**).

#### **4. Discussion**

In previous research, the simulation of wind speeds in coastal wind farm areas was mainly based on the direct simulation of WRF models with analysis data [27,35–41]. Since the parameters chosen for WRF can greatly affect the wind simulation, many studies have focused on the selection and improvement of the physical parameterization schemes of the WRF model [5,7,8]. However, another key factor affecting the model results is the initial field and the real-time update of the model fields generated by the data assimilation system. Due to the uncertainty of wind speed changes near the surface, data assimilation has not been widely used in wind speed simulation. Our work used satellite data assimilation and improved the wind speed simulation results.

Wind resource assessment tasks require high quality in both the wind speed distribution and the time series of wind speed. Our results show that the Weibull distribution of Test 2 is closer to the observation than that of Test 1. Additionally, some statistical results were improved after data assimilation, indicating that the time series of wind speed can also be made more accurate.

The application of data assimilation technology to wind speed simulation is a new trend in recent years. Our results show that the improvement from data assimilation differs greatly between seasons, that these differences depend on the wind conditions, and that both systematic bias and random error can be corrected by satellite data assimilation.

#### **5. Conclusions**

In this paper, a one-year wind speed simulation was performed in the wind farm area of Yangjiang. Through the WRF-3DVar system, satellite data assimilation was applied to wind speed simulation in wind resource assessments. The errors and correlations between wind speed and wind speed anomalies in the two tests were compared through three indices—RMSE, IA, and R. Finally, we analyzed the differences of each index in different seasons. The main conclusions are as follows.

The Weibull distribution of Test 2 is closer to the observation than that of Test 1; after applying data assimilation, the distribution of wind speed is more accurate.

According to the simulation results for the different seasons, it can be found that the wind simulation in the coastal areas of Guangdong performs best in summer and worst in spring. This may be because the spring wind mainly comes from the ocean direction, and in winter and spring, the WRF model has a larger systematic bias.

Compared to the conventional observations, the satellite data have greater geographic coverage, especially on the sea. The simulation results using satellite data assimilation can reduce the wind speed error and have better agreement with the observation data. Except for winter, the value of RMSE is greatly reduced in the other seasons. Comparing the wind speed and wind speed anomalies results, it can be seen that both the systematic bias and the random error were corrected. The IA and R between simulation results and observations are significantly improved in some months with very low correlations (April and May).

Because conventional observations are mainly distributed at inland synoptic observation stations, the performance of conventional data assimilation is weaker than that of satellite data assimilation.

From the improvements in RMSE, IA, and R with data assimilation, it can be found that the wind speed simulation is improved in spring and autumn, while the improvements are limited in winter. Data assimilation can significantly improve simulations during periods of poor simulation performance. From the wind distribution of the model results, we can find that the wind direction in winter was the same as that of the winter monsoon, and the systematic bias of the model was large during winter.

The wind speed improvements from data assimilation at the lower level (10 m) were less significant than those at the upper levels (50 and 70 m). This is because the wind near 10 m may be greatly affected by the terrain.

The current methods for wind resource assessment mainly use numerical models to simulate wind speed. Through this work, it can be found that data assimilation can be used to reduce simulation errors (both systematic bias and random errors) and to improve the correlation between the simulation results and observations. Furthermore, the combined WRF-3DVar approach can be applied in wind resource assessment for wind farm location selection and other applications.

**Author Contributions:** Conceptualization, Y.L.; methodology, W.X.; software, W.X.; validation, W.X.; formal analysis, W.X.; investigation, W.X.; resources, Y.L.; data curation, W.X.; writing—original draft preparation, W.X.; writing—review and editing, Y.L. and L.N.; visualization, W.X.; supervision, Y.L. and L.N.; project administration, Y.L.; funding acquisition, Y.L. All authors read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the National Key Research and Development Program of China (2018YFB1502803) and the Scientific Research Program of Tsinghua University "Research on Wind Farm Weather Forecasting Technology for Power Grid".

**Acknowledgments:** The wind observation data of the wind towers were provided by China Huaneng Group Co., Ltd. (CHNG), and the software of the tower data decoding was also provided by CHNG. The authors are very grateful for the observation data and its decoding software provided by CHNG. The FNL data were provided by CISL Research Data Archive (RDA) website (https://rda.ucar.edu/datasets/ds083.2/). The surface and upper-air observation data were also provided by CISL RDA (surface: https://rda.ucar.edu/datasets/ds461.0/ and upper air: https://rda.ucar.edu/datasets/ds351.0/). The authors are also grateful for the provider of these data.

**Conflicts of Interest:** The authors declare no conflicts of interest.

