Article

Development of a Load Model Validation Framework Applied to Synthetic Turbulent Wind Field Evaluation

Fraunhofer Institute for Wind Energy Systems IWES, 27572 Bremerhaven, Germany
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Energies 2024, 17(4), 797; https://doi.org/10.3390/en17040797
Submission received: 22 December 2023 / Revised: 29 January 2024 / Accepted: 5 February 2024 / Published: 7 February 2024
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

Abstract
The validation of aeroelastic load models used for load calculations on wind turbines substantially increases the confidence in the accuracy and correctness of these models. In this contribution, we introduce a framework for the validation of these models, integrating a normalized area metric as an objective, quantifiable validation metric that considers the entire statistical distribution of a model and a benchmark and additionally enables a comparison of model accuracy between sensors of different physical units. The framework is applied to test cases that evaluate varying synthetic turbulent wind fields. Two test cases with a focus on turbulence parameters and abnormal shear conditions based on comprehensive wind measurements at the Testfeld Bremerhaven are used to demonstrate the workflow with four different members using IEC-compliant and measurement-derived wind field parameters, respectively. Along with these measurements, an uncertainty model for synthetic wind fields is introduced to quantify propagated wind measurement uncertainties associated with the measured boundary conditions during a validation campaign. The framework is presented as a straightforward and concise methodology to not only find but also quantify mismatches of load models. Major mismatches are found for wind fields associated with larger uncertainties in the mean wind field due to a reduced spatial resolution of measurements.

1. Introduction

Aeroelastic modeling of wind turbines represents an essential part of the turbine design procedure [1]. The utilization of turbine modeling with low-fidelity aero-servo-elastic simulations facilitates access to an expeditious and cost-efficient assessment of a wind turbine’s structural response under operational conditions faced throughout its designed lifespan. The simulations are based on two essential components: a wind turbine load model, encompassing the aerodynamic and structural representation of the turbine along with its control dynamics, and external exciters, including the atmospheric wind, among others. The turbine load models aim to describe the turbine’s response to the experienced excitation as realistically as possible while being exposed to inherent modeling uncertainties. Inaccurate and biased load models might lead to an incorrect assessment of the produced power or even an insufficient structural and controller design, and consequently to a higher failure rate. To obtain a high level of trust in these models and to identify potential bias, they shall be validated based on in-field measurements by comparing simulation results with measurement results of a validation experiment conducted on an actual turbine [2]. Further, not only the turbine model parametrization but also the wind field modeling process is subjected to certain uncertainties and a limited representativeness of the inputs to this process. The challenges of validation include: capturing accurate boundary conditions, selecting relevant measurements as input, conducting simulations that reproduce the behavior of the system under the measured boundary conditions, and, finally, the assessment and interpretation of the results.
Due to the complexity of the validation and the requirement of numerous measured periods, automating the simulations and the subsequent comparison is essential and can be addressed with automated frameworks. Hills et al. [3] introduced a Validation and Verification (V&V) framework that assists in various stages of the validation process, including the program planning, experiment design, uncertainty quantification, and evaluation of the model. This framework does not provide code for the validation of aeroelastic load models or suitable metrics; however, it summarizes the recommended best practices associated with a model validation program and its planning.
Verdonck et al. [4] presented a framework for the uncertainty quantification of aeroelastic wind turbine simulation tools based on surrogate models, using polynomial chaos expansion. However, this framework does not provide a methodology for the comparison of simulation results. Within the IEA Task 30, the comparison between measurements and simulations involved the use of statistical values, along with point-to-point evaluations of selected time series [5,6,7,8]. Söker et al. [9] offer a guideline for the validation of loads on wind turbines based on wind turbine load models. They emphasize that the validation of load models must initially establish the consistency of the exciting environmental conditions, comprising wind speed and turbulence intensity within the 10 min period. They also mention the direct comparison of time series through visual comparison. Zierath et al. [10,11] use the statistical binning method with comparisons of minimum, maximum, and mean values for validation. In addition, the authors use the Rainflow Matrix and visualize load ranges against the number of accumulated cycles. This results in qualitative classifications of different simulation tools; a quantitative evaluation is not presented. Dimitrov and Natarajan [12] also compare simulations with measurements using statistical values and statistical binning. However, no validation metric is used for the comparison.
Suitable validation metrics are required for the appropriate and effective comparison of the numerous simulations and measurements. Various metrics have been introduced in the literature, showcasing their applicability and utility for validation purposes [13,14,15]. Hypothesis testing techniques used in statistical inference can also be applied to model validation [16,17]. For this purpose, a suitable hypothesis test—e.g., a t-test—is used to check whether the two samples (one from measurement and one from simulation) can originate from the same distribution within the limits of the significance level. This method answers whether a model is valid or not but does not provide a quantifiable measure of the models’ accuracy. The method of Bayesian updating is also used in the field of model validation but has an emphasis on calibrating probability distributions of parameters [18,19]. A conceptually simpler approach to computing a validation metric is the comparison of estimated means from both the experiment and the simulation; see also [13]. Nevertheless, an application of a validation framework to the simulation of wind turbine load models with an appropriate metric is, to our knowledge, still missing and shall be introduced with this contribution.
For precise validation, it is essential to closely model the exciters, which serve as boundary conditions, to replicate the excitation encountered during the validation experiments. As the wind field approaching the rotor cannot be fully captured, synthetic wind fields are generated based on statistical parameters derived from wind measurements or guidelines, describing both turbulent fluctuations and the mean wind flow [1]. Nevertheless, a discrepancy between the modeled and the actual experienced wind field due to the limited spatial distribution and information of wind measurements may result in a divergent excitation in the simulation, potentially leading to a misinterpretation of the load model and its reliability. Hence, the precise capturing and subsequent modeling of the wind field is of high importance.
Industry-accepted guidelines recommend varying spectral turbulence models with empirically derived parameters [1]. However, with the ever-increasing size of wind turbines, these models and parameters lose their validity [20,21]. Further development of the existing and of new models, to increase the representativeness of the atmospheric conditions as well as of statistical behavior that is not covered by the standards, is part of current research [22,23,24,25]. Additionally, methodologies for the incorporation of time series measurements of the wind fluctuations within the wind field through, e.g., 3D ultrasonic anemometers or Light Detection and Ranging (LiDAR) devices were developed to achieve a higher agreement of the wind fields with the actually experienced fields [12,26,27,28,29,30]. The availability of these different models and parameters raises the question of which models should be preferred for the load model validation process. Additionally, depending on the corresponding parameter, the derivation of the wind field parameters can be complex but might have a considerable impact on the evaluated simulation results. Therefore, we shall evaluate which turbulence model should be used and which wind field parameters need to be captured, and with what effort, so that the synthetic wind fields reach the required accuracy for a load model validation. Additionally, the random seeds that are used for the realization of the statistical wind fields as well as uncertainty in the parameters introduce an inherent uncertainty in the synthetic wind fields and, consequently, in the simulation results. These uncertainties must be considered when evaluating the load models and the influence of the varying synthetic wind fields.
In this contribution, we want to investigate how wind field models and turbine parametrization can be compared against each other most efficiently. Therefore, the first objective aims to develop a framework for the validation of turbine load models, along with a dedicated validation metric. This metric can be used to condense the comprehensive and complex information of the turbine’s response into compact, relatable information. Secondly, differing synthetic wind fields based on varying turbulence models and wind field parameters shall be evaluated and compared through the developed validation framework. Here, two test cases are chosen to demonstrate the capabilities and limitations of the framework and its metric in the evaluation of varying synthetic wind fields. Lastly, an uncertainty quantification model for these wind fields, associated with measurement uncertainties as well as the statistics-based random generation of time series, is presented. These additional variations in the synthetic wind fields are again related to the spread of the statistical distribution of simulated results within a validation approach and indicate the uncertainty due to the random seed within it.
The remainder of the contribution is structured as follows. In Section 2, the methodology is introduced, including the proposed load model validation framework with its metric along with the synthetic wind field generation workflow and the associated uncertainty model. Two distinct test cases for the demonstrative application of the framework are presented, followed by their results in Section 3. Lastly, the discussion of the findings is given in Section 4, and conclusions are drawn in Section 5.

2. Methods

This section describes the proposed validation methodology, based on a load model validation framework with a novel validation metric, in Section 2.1, followed by the aeroelastic load simulations in Section 2.2, the workflow of synthetic wind field generation in Section 2.3, the wind measurements used as input in Section 2.4, and the formulation of the test cases of this contribution in Section 2.5.

2.1. Validation Methodology

To trust simulation results based on models, it is essential that these models are validated. This also applies to aeroelastic models used for load calculation [2]—in the following, these are referred to as “load models”. For validation, measurements of the model’s sensors of interest (SoI) and the corresponding boundary conditions during the validation experiments are required. In addition, simulations of the SoIs based on the boundary conditions recorded in the validation experiments are required. This allows the quality of the model to be determined with a quantitative comparison of the simulation output with the measurement output. ASME V&V 10 [31] defines validation as “the process of determining the degree to which the model is an accurate representation of corresponding physical experiments from the perspective of the intended uses of the model”. Figure 3.3-1 of [31] illustrates the V&V process. There, the quantitative comparison between simulated and experimental outputs is outlined as the assessment activity of validation. The validation framework presented in this contribution focuses on this aspect.
Thus, it compares simulation results with benchmark results. In the sense of validation, the benchmarks are measurement results. In this work, we use the validation framework to compare the effect of different wind fields on the loads with each other. Therefore, the benchmarks are the simulation results based on a benchmark wind field.

2.1.1. Load Model Validation Framework

The basic outline of the load validation framework presented in this contribution is shown in Figure 1. Input to the simulations carried out with the aeroelastic model are the boundary conditions, which are represented by synthetic wind fields. These wind fields are created based on input parameters extracted from turbulence measurements and site assessment datasets or based on IEC-recommended values, respectively [1]. This wind field generation process will be described in detail in Section 2.3.
The output of the aeroelastic model simulations are time series of loads, deformations, and other wind turbine states such as pitch angle or electrical power. From these outputs, the desired sensors for evaluating the model behavior are selected. However, to assess the accuracy of a load model, the time series of each sensor is condensed into a number. This is performed by applying an appropriate response metric. Thus, for each simulation or measured period and each sensor of interest (SoI), we receive one System Response Quantity (SRQ). Finally, to assess the accuracy of the model, the distributions of the simulated SRQ and the measured SRQ are compared using the validation metric. To compare the effect of different wind models on the simulated loads, the resulting SRQs generated by the simulation of different wind fields are compared and evaluated in the following.
This work is based on a framework for the automatic execution of load calculation simulations, which was developed at Fraunhofer IWES [32] and has now been extended by the proposed validation framework for the evaluation and validation of load calculation results. This validation framework is also implemented in Python and explained in more detail in the following.

2.1.2. Sensors of Interest, Response Metric, and System Response Quantity

To compare the influence of different wind fields on the simulated loads, the procedure starts with conducting one simulation for each input wind field. Subsequently, the SoI is selected as a time series of a specific load channel from the simulation results. These SoI can be, e.g., electrical power, blade root bending moments, or tower top deflections. To demonstrate the application of the load validation framework, we will limit ourselves in this contribution to some of the most relevant sensors for the design and validation of load models based on [1,2] without claiming completeness. The considered sensors are given in Table 1.
To assess the ability of different tools to reproduce physical effects, time series have to be condensed into more comparable metrics. This is especially necessary when many simulation results based on many wind fields of different seeds are to be examined. Robertson et al. [33] introduced the concept of response metrics as statistical measures that can be calculated from the simulated or measured time series results of the SoIs such as mean, standard deviation, or extreme values. In this contribution, we use mean and standard deviation, which are frequently used statistical properties, as well as max and min and short-term damage equivalent loads (stDEL), which are used for extreme and fatigue load analysis, respectively [1]. After a response metric is applied to an SoI, we receive the System Response Quantity (SRQ), which is the input to the validation metric.
In this contribution, we compare the effect of wind fields on load simulation results by looking at different SoIs and response metrics. The investigated measures are listed in Table 1 and Table 2. For the blade’s root bending moment (rbm), we will focus on blade one only, assuming equal behavior for all three blades.
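For illustration, a minimal Python sketch of the response metric step is given below; it assumes plain NumPy arrays as input and the third-party rainflow package for cycle counting (an assumption; any rainflow-counting routine would do), and the Wöhler exponent as well as the number of reference cycles are placeholder values, not the settings used in this work.

```python
import numpy as np
import rainflow  # third-party rainflow counting package (assumed available)

def response_metric(ts, metric, m=10.0, n_ref=600.0):
    """Condense one SoI time series into a single System Response Quantity (SRQ).

    ts     : 1-D array, simulated or measured sensor time series (10 min period)
    metric : "mean", "std", "max", "min", or "stDEL"
    m      : Woehler exponent (placeholder value, material dependent)
    n_ref  : number of reference cycles, here 1 Hz over a 600 s period
    """
    if metric == "mean":
        return float(np.mean(ts))
    if metric == "std":
        return float(np.std(ts, ddof=1))
    if metric == "max":
        return float(np.max(ts))
    if metric == "min":
        return float(np.min(ts))
    if metric == "stDEL":
        # short-term damage equivalent load from rainflow-counted load ranges
        ranges, counts = zip(*rainflow.count_cycles(ts))
        ranges, counts = np.asarray(ranges), np.asarray(counts)
        return float((np.sum(counts * ranges**m) / n_ref) ** (1.0 / m))
    raise ValueError(f"unknown response metric: {metric}")

# one SRQ per simulated (or measured) period and per sensor, e.g.:
# srq = response_metric(blade1_flapwise_rbm, "stDEL")
```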

2.1.3. Validation Metric

A well-defined measure of the discrepancy between predicted and measured data is necessary to assess the quality of a model. This measure is called the validation metric; low values indicate a good match of the model to the measured data. Following Oberkampf and Barone [13], a validation metric should feature certain characteristics, such as objectiveness, and the representation of differences in the full distribution of the predictions and the actual data. Oberkampf and Roy [14] add the recommended feature that a validation metric should be a true metric in the mathematical sense. Thus, the validation metric should be sensitive to more aspects than just the mean and standard deviation and should also be non-negative. As a validation metric that fulfills these requirements, Ferson et al. [15] propose the area metric. Unlike some other metrics that focus only on means or variances, the area metric considers the entire statistical distribution of both the benchmark data and the predictions. This means it is capable of detecting differences across the entire distribution. The area metric can also be used to compare different sample sizes if, for example, there are fewer measurements than simulations. However, since the area metric is based on distributions, information about the order of occurrence of the SRQs in a sample is not considered. In the case that the benchmark and simulated data occur as paired samples, this might be a limitation—one that the comparison method of estimated means also has, for example. In the presented load model validation framework, we use this area metric as a validation metric, which is further specified in the following.
A cumulative distribution function can be used to characterize a sample of model predictions or empirical observations. In Figure 2, exemplary distribution functions from two models, $S_1(x)$, $S_2(x)$, and a benchmark, $B(x)$, are shown. In our case, the sample data points are the SRQ values, which are represented by $x$. In the sense of empirical observations, this is also called the empirical cumulative distribution function or just the empirical distribution function (EDF). The area metric $AM$ is the area between the EDF of the benchmark data $B$ and the cumulative distribution function of the model predictions $S$; see the striped or dotted area in Figure 2:
$AM(B, S) = \int \left| S(x) - B(x) \right| \, \mathrm{d}x.$ (1)
The area metric unit is consistent with the unit of the used SoI, such as N m for the blade rbm. To compare the validation results using different SoIs and different response metrics, we normalize the area metric results. To obtain a unified evaluation criterion that allows comparison between multiple parameters of different scales for the same product, Suo et al. [34] normalize the area metric result by the integral of the benchmark EDF over the interval from $\mu - 3\sigma$ to $\mu + 3\sigma$, with $\mu$ and $\sigma$ as the mean and standard deviation of the benchmark data. As the reference area to normalize to ($RAN$), they obtain the area below the EDF:
$RAN_{\mathrm{Suo}}(B) = \int_{\mu - 3\sigma}^{\mu + 3\sigma} B(x) \, \mathrm{d}x.$ (2)
However, with this normalization, the different data points in the EDF are weighted differently: for example, the SRQ value at $P_{0.1}$ is weighted with 0.1, while the SRQ value at $P_{0.9}$ is weighted with 0.9. We define the $RAN$ as the area between the value of the median $\mathrm{med}(x)$ and the discretized benchmark EDF $B(x)$, as shown in gray in Figure 2.
$RAN(B) = \frac{1}{n} \sum_{i=1}^{n} \left| B(x_i) - \mathrm{med}(B(x)) \right|.$ (3)
This area weights the distance of each data point to the median equally and represents the spread within the benchmark EDF. Since the median has the property of minimizing the sum of the distances, we automatically obtain a minimized area, e.g., in comparison to the mean. By minimizing the $RAN$, we therefore ensure that we have a uniform procedure for normalization. We obtain the normalized area metric $NAM$:
$NAM(B, S) = \frac{AM(B, S)}{RAN(B)}.$ (4)
The normalized area metric $NAM$ thus relates the uncertainty arising from the model predictions in reference to the benchmark data to the uncertainty within the benchmark data themselves. In the application of the framework, this means that the $NAM$ relates the uncertainty of the wind model under test in reference to the benchmark model to the uncertainty within the benchmark model due to the number of seeds.
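A minimal sketch of the metric computation, assuming the benchmark and model SRQs are available as 1-D arrays: the piecewise-constant integration over the merged sample support is one possible discretization, and the reference area is interpreted here as the mean absolute deviation of the benchmark SRQs from their median, per the description above; this is not necessarily the exact implementation used in the framework.

```python
import numpy as np

def ecdf(sample, x):
    """Empirical distribution function of `sample`, evaluated at the points x."""
    sample = np.sort(np.asarray(sample))
    return np.searchsorted(sample, x, side="right") / sample.size

def area_metric(benchmark, model):
    """Area between the two empirical CDFs (Ferson et al.), in the unit of the SRQ."""
    x = np.sort(np.concatenate([benchmark, model]))
    # both step functions are constant between consecutive support points,
    # so the integral reduces to a sum of rectangle areas
    diff = np.abs(ecdf(model, x[:-1]) - ecdf(benchmark, x[:-1]))
    return float(np.sum(diff * np.diff(x)))

def ran(benchmark):
    """Reference area: mean absolute distance of the benchmark SRQs to their median."""
    b = np.asarray(benchmark)
    return float(np.mean(np.abs(b - np.median(b))))

def nam(benchmark, model):
    """Normalized area metric NAM = AM / RAN."""
    return area_metric(benchmark, model) / ran(benchmark)

# usage: one array of SRQs per member, e.g. 50 seed realizations each
# nam(benchmark_srqs, model_srqs)
```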
In this contribution, we compare the effect of different synthetic wind fields on the load simulation results but do not compare them to experimental data of a real existing physical system. We are aware that the term validation metric may be misleading in this context, as we do not execute a validation. However, since the load model validation framework will be used in the future to compare models with measured data from validation experiments, we consciously use the term validation metric. Since the resolution of the probabilistic prediction is discretized by a limited number of simulations, we also use the term EDF for the prediction provided by the model.

2.2. Aeroelastic Simulations

The load simulation results used in this work are generated by simulating a model of an 8 MW wind turbine in the aeroelastic simulation tool MoWiT, which has already been used for validation in [35]. As described in [36], MoWiT contains physical models for aerodynamics, structural dynamics, hydrodynamics, and control and solves them in the time domain. This computational model was developed by Fraunhofer IWES and is primarily used for load analysis of offshore and floating [37] wind turbines, as well as for automated simulation and optimization [32]. Further, MoWiT has been in productive operation as a virtual rotor [38] for Hardware-in-the-Loop (HiL) applications in the Dynamic Nacelle Testing Laboratory (DyNaLab) [39] for several years now.

2.3. Synthetic Wind Field Generation

Aeroelastic simulations of wind turbines require as input the wind vector $\mathbf{u}$ with the components $(u_1, u_2, u_3) = (u, v, w)$ on a grid with the coordinates $\mathbf{C} = (C_1, C_2, C_3) = (x, y, z)$. Using Taylor’s frozen hypothesis [40], the $x$ coordinate is normally translated into the time domain using $t = x / \langle u \rangle$, where $\langle u \rangle$ is the mean flow speed at a reference position, which is often the wind speed at the hub position. As the inflow approaching the turbine cannot be fully measured, synthetic wind field modeling is necessary to incorporate wind field parameters derived from measurements. Therefore, often the Reynolds decomposition is applied, separating the wind field into a time-averaged mean field and turbulent, zero-mean fluctuations:
$\begin{pmatrix} u \\ v \\ w \end{pmatrix}(t, y, z) = \begin{pmatrix} \langle u \rangle \\ \langle v \rangle \\ \langle w \rangle \end{pmatrix}(y, z) + \begin{pmatrix} u' \\ v' \\ w' \end{pmatrix}(t, y, z),$ (5)
where $\langle \cdot \rangle$ denotes a temporal mean, $\langle u \rangle$ is aligned with the mean wind direction, and $\langle v \rangle = \langle w \rangle = 0$. The vertical variation of the mean wind speed with height $z$ is often modeled by the empirical power law with the exponent $\alpha$:
$\langle u \rangle(z) = u_{\mathrm{ref}} \left( \frac{z}{z_{\mathrm{ref}}} \right)^{\alpha},$ (6)
where the horizontal shear is neglected. While this model covers most atmospheric conditions quite well at the heights of modern wind turbines, it lacks representativity of more complex profiles, such as low-level jets (LLJs), whose representation requires measurements of the shear profile $\langle u \rangle(z)$ over greater heights [41,42]. The vertical shear profile can be easily measured through a tall meteorological (met) mast with sensors on various levels or a vertically profiling Light Detection and Ranging (LiDAR) device.
Turbulence characteristics, on the other hand, require a more complex site-specific characterization. Due to the stochastic nature of turbulence, the random wind velocity fluctuations in longitudinal, lateral, and vertical directions ( u , v , w ) are generated based on synthetic models for the whole wind field. These synthetic turbulence models, however, can differ in their abilities to model certain aspects of atmospheric turbulence. The turbulence models and underlying principles are described briefly in Section 2.3.1, followed by the turbulence-scaling approaches in Section 2.3.2, and an uncertainty model for synthetic wind fields in Section 2.3.3.

2.3.1. Turbulence Generation

The IEC 61400-1:2019 wind turbine design standard [1] suggests two different turbulence models to generate random fluctuations: the Kaimal spectral model with exponential coherence [43,44] (KSEC in the following) and the Mann uniform shear turbulence model [45,46] (Mann in the following). Both models require a uniformly distributed random seed for the statistical generation of random time series. They both assume homogeneity of second-order statistics in the y-z plane and use an IEC-defined integral turbulence length scale Λ 1 , which is fixed to a value of 42 m for hub heights ≥ 60 m [1]. The models are described briefly in the following; detailed information can be found in the outlined references.
The KSEC model requires a single-point power spectral density, defined by the Kaimal spectrum, and an exponential coherence function. The two-sided Kaimal spectrum in the wave number domain, assuming Taylor’s frozen hypothesis with $k_1 = 2 \pi f / \langle u \rangle$, is defined as [1]:
$\frac{k_1 F_i(k_1)}{\sigma_i^2} = \frac{k_1 L_i / \pi}{\left( 1 + 3 k_1 L_i / \pi \right)^{5/3}}$ (7)
$\text{with } \sigma_i / \sigma_1 = [1, 0.8, 0.5]; \quad L_i / \Lambda_1 = [8.1, 2.7, 0.66] \quad \text{for } i = (1, 2, 3),$ (8)
where $L_i$ is the integral scale parameter and $\sigma_i$ the standard deviation of the respective velocity component $i$. Note that, here, we refer to the definitions of the IEC guideline, which somewhat differ from the original description of the Kaimal spectral model [1]. Time series realizations of the spectrum can then be obtained by inverse Fourier transformation. Additionally, to account for spatial correlation, the longitudinal component exhibits correlated phases defined by the exponential coherence function:
$\mathrm{Coh}(r_{23}, f) = \exp\left( -12 \sqrt{ \left( f \, r_{23} / \langle u \rangle \right)^2 + \left( 0.12 \, r_{23} / L_c \right)^2 } \right),$ (9)
where $L_c = 8.1 \Lambda_1$ is the coherence scale parameter and $r_{23}$ is the separation vector between two points projected onto the 23-plane [1]. Here, we use the open-source tools TurbSim V2.0 [47] and PyConTurb V2.7.3 [26,27] to generate wind fields based on the KSEC methodology. The former is used for IEC-compliant wind fields, whereas the latter can flexibly be used with parameters and functionalities deviating from the guideline.
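For illustration, the following sketch evaluates Equations (7)–(9) with the IEC-recommended parameters; it only shows how the spectrum and coherence are computed and is not a substitute for TurbSim or PyConTurb. The example values for the mean wind speed and the separation are arbitrary.

```python
import numpy as np

LAMBDA_1 = 42.0                                 # integral scale for hub heights >= 60 m
SIGMA_RATIO = np.array([1.0, 0.8, 0.5])         # sigma_i / sigma_1, Eq. (8)
L_RATIO = np.array([8.1, 2.7, 0.66])            # L_i / Lambda_1, Eq. (8)

def kaimal_spectrum(k1, sigma1, component=0):
    """Two-sided Kaimal spectrum F_i(k1) in the wave number domain, Eq. (7)."""
    L_i = L_RATIO[component] * LAMBDA_1
    sigma_i = SIGMA_RATIO[component] * sigma1
    return sigma_i**2 * (L_i / np.pi) / (1.0 + 3.0 * k1 * L_i / np.pi) ** (5.0 / 3.0)

def exponential_coherence(r23, f, u_mean, L_c=8.1 * LAMBDA_1):
    """IEC exponential coherence of the longitudinal component, Eq. (9)."""
    return np.exp(-12.0 * np.sqrt((f * r23 / u_mean) ** 2 + (0.12 * r23 / L_c) ** 2))

# example: longitudinal spectrum and coherence for a 10 m separation at 8.07 m/s
k1 = np.logspace(-3, 0, 200)                    # wave numbers in 1/m
F_u = kaimal_spectrum(k1, sigma1=1.0, component=0)
coh = exponential_coherence(r23=10.0, f=k1 * 8.07 / (2 * np.pi), u_mean=8.07)
```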
Contrary to the KSEC model, the Mann model does not create $y$-$z$ slices of the wind vector but models a turbulent box, generating turbulent fluctuations as a function of their longitudinal spacing $x$, which is then transferred into the time domain. The Mann model defines the covariance tensor $R_{ij}$ and its Fourier transform, the spectral tensor $\Phi_{ij}(\mathbf{k})$ [46]:
$R_{ij}(\mathbf{r}) = \langle u_i'(\mathbf{C}) \, u_j'(\mathbf{C} + \mathbf{r}) \rangle,$ (10)
$\Phi_{ij}(\mathbf{k}) = \frac{1}{(2\pi)^3} \int R_{ij}(\mathbf{r}) \exp(-\mathrm{i} \, \mathbf{k} \cdot \mathbf{r}) \, \mathrm{d}\mathbf{r},$ (11)
which solely depends on the spatial separation vector $\mathbf{r} = (r_1, r_2, r_3)$. The energy spectrum within the spectral tensor is then modeled by the isotropic von Kármán spectrum, which is modified to incorporate the lifetime of eddies and anisotropy through linear vertical shear deformation. Therewith, the Mann uniform shear model is parametrized by a length scale $L_M$, a shear distortion or anisotropy parameter $\Gamma$, and the dissipation factor $\alpha \epsilon^{2/3}$. The tensor is hard to measure directly. Consequently, the cross-spectra, defined as
$\chi_{ij}(k_1, \Delta y, \Delta z) = \iint \Phi_{ij}(\mathbf{k}) \exp\left( \mathrm{i} (k_2 \Delta y + k_3 \Delta z) \right) \mathrm{d}k_2 \, \mathrm{d}k_3,$ (12)
are measured and evaluated. The two-sided one-point spectrum in the wave number domain is then defined as $F_i(k_1) = \chi_{ii}(k_1, 0, 0)$ and is obtained through numerical integration. The Mann parameters also determine the coherence of the simulated turbulent field. A more detailed description of the Mann model can be found in [1,45,46]. Fitting the Mann model to the Kaimal model results in a shear parameter of $\Gamma = 3.9$ and a length scale of $L_M = 0.8 \Lambda_1$, as recommended by the IEC guideline [1]. Here, we use the stand-alone Mann turbulence generator for the generation of the turbulence boxes [48].
The two introduced models are further used and developed to, e.g., model additional atmospheric conditions [22] or intermittent wind fields [24]. They have been evaluated thoroughly and their influence on simulated turbine loads has been assessed in various studies, e.g., Refs. [49,50]. Differences in resulting simulated loads were mainly traced back to different formulations of the co-coherence of the different models. Moreover, the impact of the Mann turbulence parameters has been analyzed in [20], showing that the variation can influence wind turbine loads caused by the change of coherence for low wave numbers. Adjustment of the parameters to measurements has been demonstrated in [20,51,52]. In this study, however, we will use the introduced models with adjusted turbulence parameters only to demonstrate the application of the proposed load validation workflow.

2.3.2. Turbulence Scaling

The aforementioned models require at least one parameter to parameterize the amplitude of the wind fluctuations. The targeted amplitude of the fluctuations is often parameterized by the turbulence intensity ($TI = \sigma_u / \langle u \rangle$) [1], which shall be recovered by the generated turbulent wind fields. However, due to numerical reasons, such as the spectral discretization, the required statistics cannot be exactly obtained during the initial generation of the fields. Therefore, the generated fields are often scaled to a target variance. Commonly, the statistics are given and requested at a certain point, e.g., the hub height when measured by a cup anemometer. A scaling factor can then be calculated as the ratio of standard deviations from the target to the synthetic field [53]:
$SF_i = \frac{\sigma_{i,\mathrm{target}}}{\sigma_{i,\mathrm{field}}}.$ (13)
This scaling factor is then multiplied with the generated wind fluctuations, which in principle scales the whole power spectrum [54]. TurbSim allows for two scaling options [47]: scaling with one scaling factor per component $i$, or scaling each component at each grid point individually. The latter, however, impacts the resulting coherence between points, which is undesired and thus neglected in the following. For the Mann model, the required scaling factors can be significantly larger, as the direct relation between $\alpha \epsilon^{2/3}$ and $\sigma_u$ is not straightforward, which makes the use of Equation (13) necessary to obtain the required statistics [53]. The authors of [54] introduced a method to determine the $\alpha \epsilon^{2/3}$–$TI$ relation for an exactly defined grid box to obtain similar spectra in the inertial subrange and to improve the convergence and spread of the results. In the remainder, we scale the turbulent fields with respect to the hub height position.
However, if the scaling is performed with respect to a certain point on the grid, e.g., the hub position, the requested statistics are only recovered at this single point. Hence, this introduces uncertainties in the wind fluctuations on the remaining grid points. These uncertainties along with input uncertainties propagated through the wind field generation workflow are evaluated in the next subsection.
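A minimal sketch of the hub-height scaling in Equation (13), assuming the zero-mean longitudinal fluctuations are available as a NumPy array of shape (time, y, z); the grid indices of the hub point and the target turbulence intensity in the usage comment are placeholders.

```python
import numpy as np

def scale_to_target_std(u_fluct, sigma_target, hub_iy, hub_iz):
    """Scale turbulent fluctuations so that the hub grid point hits the target std.

    u_fluct      : zero-mean longitudinal fluctuations, shape (n_t, n_y, n_z)
    sigma_target : target standard deviation at the reference (hub) point
    hub_iy/hub_iz: grid indices of the reference point (placeholders)
    """
    sigma_field = np.std(u_fluct[:, hub_iy, hub_iz], ddof=1)
    sf = sigma_target / sigma_field              # Eq. (13)
    # one factor for the whole component preserves the coherence between grid points
    return sf * u_fluct

# example: scale a synthetic field to TI = 10 % at <u> = 8.07 m/s
# u_scaled = scale_to_target_std(u_fluct, sigma_target=0.10 * 8.07, hub_iy=32, hub_iz=32)
```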

2.3.3. Determination of Synthetic Wind Field Uncertainties

The determination of uncertainties in synthetic wind fields is not simple due to the incorporation of a random seed for turbulence generation. The following gives an approach to the determination of additional statistical variations in the turbulent wind fields, considering the wind field generation workflow as a black box, which propagates uncertainties from measurements. We will focus only on the longitudinal velocity component u, but the methodology can be easily extended to v , w .
If the mean wind field is simply determined by the power law (Equation (6)), the uncertainties in the modeled mean wind field can be found using an analytical uncertainty model from the uncertainties in the measurement of the mean wind speed at the reference height, $u_{u_{\mathrm{ref}}}$, the uncertainties in the reference height itself, $u_{z_{\mathrm{ref}}}$, and those in the estimation of the power law exponent, $u_{\alpha}$, with $u_z = 0$ as in [55]:
$u_{\langle u \rangle(z)}^2 = \left( \frac{\partial \langle u \rangle(z)}{\partial u_{\mathrm{ref}}} u_{u_{\mathrm{ref}}} \right)^2 + \left( \frac{\partial \langle u \rangle(z)}{\partial \alpha} u_{\alpha} \right)^2 + \left( \frac{\partial \langle u \rangle(z)}{\partial z_{\mathrm{ref}}} u_{z_{\mathrm{ref}}} \right)^2.$ (14)
If input quantities are correlated, their correlation $r$ must be considered. However, from the measurements introduced in Section 2.4, we find that $r(u_{\mathrm{ref}}, \alpha) \approx 0.015$, which indicates no significant correlation; it is thus not considered in this uncertainty model.
The uncertainty in the wind speed $u_{u_{\mathrm{ref}}}$ is defined by the uncertainties in the measurement through the sensors, such as a cup anemometer, for instance. Their uncertainty again is determined by calibration or mounting uncertainties, among others. The uncertainty in the power law exponent depends on the vertical distribution of sensors and their measurement uncertainty. In the following, the reference wind speed and power law exponent uncertainties $u_{u_{\mathrm{ref}}}$ and $u_{\alpha}$ are assumed to be 0.1 m s⁻¹ and 0.01, respectively. The uncertainty of the reference height, caused by the uncertain determination of the corresponding height of the reference measurement, is estimated to be 0.1 m for the installation of cup anemometers. Due to the vertically expanding probe volume of a vertically profiling LiDAR, the reference height uncertainty may be higher for this measurement principle. For the reference height, we will use the hub height of the research turbine; the mean wind speed $\langle u \rangle$ is 8.07 m s⁻¹, as used in the test cases. It is important to note that we do not claim general validity for these values but provide a possible set of parameters as an initial reference point.
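To make Equations (6) and (14) concrete, the following sketch propagates the assumed input uncertainties analytically; the partial derivatives follow directly from the power law, while the shear exponent and the evaluated heights in the example are illustrative assumptions, with the reference values taken from the text where available.

```python
import numpy as np

def power_law(z, u_ref, z_ref, alpha):
    """Mean wind speed profile, Eq. (6)."""
    return u_ref * (z / z_ref) ** alpha

def mean_wind_uncertainty(z, u_ref, z_ref, alpha, u_uref, u_alpha, u_zref):
    """Standard uncertainty of the modeled mean wind speed at height z, Eq. (14)."""
    ratio = (z / z_ref) ** alpha
    d_uref = ratio                                   # d<u>(z) / d u_ref
    d_alpha = u_ref * ratio * np.log(z / z_ref)      # d<u>(z) / d alpha (zero at z_ref)
    d_zref = -alpha * u_ref * ratio / z_ref          # d<u>(z) / d z_ref
    return np.sqrt((d_uref * u_uref) ** 2
                   + (d_alpha * u_alpha) ** 2
                   + (d_zref * u_zref) ** 2)

# illustrative evaluation across the rotor (alpha = 0.2 is an assumed shear exponent)
z = np.linspace(20.0, 210.0, 50)
u_unc = mean_wind_uncertainty(z, u_ref=8.07, z_ref=114.7, alpha=0.2,
                              u_uref=0.1, u_alpha=0.01, u_zref=0.1)
```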
The transfer function of the turbulent wind field generation, on the other hand, is not known due to the seed number, which is randomly distributed. Therefore, the determination of the uncertainties in the generated turbulence is not possible analytically. Together with the scaling of the turbulence, an additional variation in the synthetic wind fields is generated. The total combined variation of the longitudinal velocity component, $\sigma_{u,c}^2(y, z)$, including the uncertainties in the mean wind field $u_{\langle u \rangle(z)}$; the target variance of the turbulence $\sigma_u^2$; the uncertainties in the target variance $u_{\sigma_u}$; the turbulence parameters’ uncertainties $u_{tp}^2$; and the scaling and seed uncertainties can then be determined with:
$\sigma_{u,c}^2(y, z) = u_{\langle u \rangle(z)}^2 + \sigma_{\mathrm{turb}}^2(y, z), \quad \text{where}$ (15)
$\sigma_{\mathrm{turb}}^2(y, z) = \sigma_u^2 + u_{\sigma_u}^2 + u_{\mathrm{scaling}}^2(y, z) + u_{\mathrm{seed}}^2 + u_{tp}^2.$ (16)
Except for $u_{\langle u \rangle(z)}$, all contributors are created during the turbulence generation and scaling process. Therefore, we separate the mean and turbulent field in the following analysis.
Comparably low computational costs of the synthetic turbulence generation allow Monte Carlo (MC) simulations to evaluate the variations caused by the scaling and the propagation of uncertainties following [56]. We generated 10,000 simulations with a fixed set of IEC-recommended Mann parameters and $\alpha \epsilon^{2/3} = 0.1$ on a (6000 × 189.5 × 189.5) m grid using the code from [57] due to its higher computing speed. Afterwards, the wind fields are scaled at the center position to a target variance of one. Here, we do not want to focus on the different wind field models but, rather, on the uncertainty quantification method itself. Therefore, we keep the parameters required for the turbulence model as fixed values without uncertainty; solely the random seed number is varied. The variations introduced through the scaling and the various seeds, $u_{\mathrm{scaling}}^2(y, z) + u_{\mathrm{seed}}^2$, can then be determined from a set of simulations by solving Equation (16). The results from the application of the uncertainty model will be shown in Section 3.1 and discussed in Section 4.1.
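The following sketch outlines how the seed- and scaling-induced variation can be isolated from such a Monte Carlo ensemble by solving Equation (16) with the remaining contributors set to zero; generate_turbulence() in the usage comment is a placeholder for whichever turbulence generator is used, and the scaling helper is the one sketched in Section 2.3.2.

```python
import numpy as np

def seed_scaling_variation(fields, sigma_target):
    """Estimate u_scaling^2(y, z) + u_seed^2 from a Monte Carlo ensemble.

    fields       : scaled fluctuation fields, shape (n_seeds, n_t, n_y, n_z),
                   each already scaled to sigma_target at the hub grid point
    sigma_target : target standard deviation used during scaling
    Returns the grid of added variance (zero at the hub point by construction).
    """
    # per-seed variance of the longitudinal component at every grid point
    var_per_seed = np.var(fields, axis=1, ddof=1)        # (n_seeds, n_y, n_z)
    # ensemble-averaged variance sigma_turb^2(y, z)
    sigma_turb2 = np.mean(var_per_seed, axis=0)          # (n_y, n_z)
    # Eq. (16) with u_sigma_u = u_tp = 0: the remaining terms are scaling + seed
    return sigma_turb2 - sigma_target**2

# usage with a placeholder generator and the scaling sketch from Section 2.3.2:
# fields = np.stack([scale_to_target_std(generate_turbulence(seed), 1.0, 32, 32)
#                    for seed in range(10_000)])
# added_var = seed_scaling_variation(fields, sigma_target=1.0)
```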

2.4. Wind Measurements at the Testfeld BHV

As introduced before, wind parameters are required as input to the framework to evaluate the influence of different wind inputs and the various wind field models. Both the parameters for the mean wind field and also the turbulent field are required for the generation of the synthetic wind fields, which can be based on parameters given by the IEC guidelines. These numbers, however, are kept general to cover all sites and are not specific to the individual site with individual atmospheric conditions and situations that are evaluated in a load validation. Therefore, wind measurements are of significance to describe the atmospheric conditions accurately and to reduce uncertainties in the following validation. In this contribution, we use the comprehensive dataset of wind measurements from the Testfeld Bremerhaven (BHV) around the AD8-180 research wind turbine, installed on a former airport in the northwest of Germany, shown in Figure 3.
The test site is characterized by an onshore climatology with flat terrain for southwesterly winds, whereas easterly wind directions are characterized by highly turbulent conditions over rural terrain. Various wind measurements, both remote-sensing and in situ sensors, were installed during a campaign of over 4 years to capture and characterize the wind conditions as the main exciter to the research wind turbine [58,59,60]. An IEC-compliant meteorological (met) mast, covering multiple heights up to the hub height with wind vanes, cup, and 3D ultrasonic anemometers was installed, providing access to high-frequency turbulence characteristics, along with a vertically profiling LiDAR of the type WindCube V2, measuring wind speed and direction at 20 heights from 40 m to 290 m. The vertically profiling LiDAR was installed next to the met mast; both were located at a horizontal distance of 400 m (≈2.2 D) in a direction of 189° from the turbine. The heights of the installed sensors and measurements above ground level (aGL) at the met mast are indicated in Figure 3.
In this contribution, we use the accurate and spatially highly resolved wind speed measurements of the vertical profiler with the high-frequency three-dimensional turbulence measurements of the ultrasonic anemometers of type Gill Windmaster from the met mast, measuring three wind velocity components at a sampling frequency of 20 Hz .

2.5. Test Cases Formulation

Having introduced the load validation framework in Section 2.1.1, we want to demonstrate its functionality, potential, and limitations using two distinct test cases with a different focus in each. We do not claim the completeness of test cases but provide two relevant cases for the evaluation of the framework. Except for certain selected parameters, the wind field parameters are kept the same between the test cases. This way, we can examine the impact of individual wind field parameters on the simulated loads through the load validation framework.
The main synthetic wind field parameters and their origin for both test cases are given in Table 3.
The first test case focuses on the impact of varying turbulence parameters, which can be extracted from measurements or adopted from the IEC 61400-1 guideline, and the corresponding turbulence models. The second test case underlines the difference in modeled loads based on varying levels of information about the measured vertical wind speed profile. Both test cases consist of four members each, with every member representing a unique set of input parameters.
Synthetic wind fields are generated using 50 random seeds for each individual member of the test cases, resulting in 800 wind fields that are then simulated. The set of seeds was randomly generated once and kept the same for all members of a test case. Time series with a total length of 600 s were generated on a y-z grid with 64 × 64 grid points, covering a plane of (200 × 200) m, at a sampling frequency of 2 Hz.
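A brief sketch of how the shared seed set and grid discretization could be defined is given below; the master seed and the generator call are placeholders, since TurbSim, PyConTurb, and the Mann generator are driven through their own tool-specific inputs.

```python
import numpy as np

# one fixed, reproducible set of 50 seeds, shared by all members of a test case
rng = np.random.default_rng(12345)              # the master seed is an arbitrary choice
seeds = rng.integers(low=1, high=2**31 - 1, size=50)

grid = {
    "n_y": 64, "n_z": 64,                       # grid points in the rotor plane
    "width_m": 200.0, "height_m": 200.0,        # covered plane of (200 x 200) m
    "duration_s": 600.0, "freq_hz": 2.0,        # 10 min time series at 2 Hz
}

# for member in members:                        # e.g. Mann IEC, Mann meas, KSEC IEC, KSEC meas
#     for seed in seeds:
#         write_generator_input(member, grid, seed)   # placeholder for tool-specific I/O
```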

2.5.1. Test Case 1: Variation of Turbulence Generation

The set of spectral turbulence parameters ($tp$) and the corresponding model used for the generation of synthetic turbulent wind fields may be extracted from the IEC guideline [1]. The suggested parameters of the KSEC model were empirically derived from the Kansas experiment in 1968 with sensors installed on a 32 m high mast [43]; the Mann parameters were derived from a fit to this model. However, the heights of modern wind turbines are substantially larger, and the spectral parameters can deviate remarkably from the IEC parameters, as shown by [21] (their Figure 17), especially for neutral and unstable atmospheric stability conditions. In this test case, we want to compare the different models with turbulence parameters derived from measurements against the IEC-compliant wind field models.
Therefore, we want to extract the turbulence parameters characterizing the turbulent conditions at the Testfeld BHV site from velocity spectra obtained through high-frequency measurements of the sonic anemometers. As the models were initially made to model atmospheric boundary-layer turbulence with neutral stratification, we filter our dataset, ranging from January 2021 to September 2022, for situations with neutral conditions employing the Obukhov length $L_O$, which is computed as:
$L_O = - \frac{T_s \, u_*^3}{\kappa \, g \, \langle w' T_s' \rangle} \quad \text{with} \quad u_* = \left( \langle u' w' \rangle^2 + \langle v' w' \rangle^2 \right)^{1/4},$ (17)
where $u_*$ is the friction velocity, $g$ is the Earth’s gravitational acceleration, $T_s$ is the sonic anemometer temperature, and $\kappa$ is the von Kármán constant (here 0.4). Only situations with neutral atmospheric stability, defined by $-0.05 \leq z / L_O < 0.05$ following [21] and based on the lowest sonic anemometer statistics, are further considered. The 10 min spectra from the 110 m sonic anemometer for periods with $\langle u \rangle \geq 3$ m s⁻¹ and wind directions within a sector of [190°, 250°] were calculated after linear detrending of the time series. We remove spectra with less than 90% data availability, resulting in 2120 spectra that were averaged and binned in logarithmically equidistant bins in the wave number domain. Bins with less than 7% of the data points have been ignored.
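A sketch of this stability filter is given below, computing the Obukhov length from one 10 min block of 20 Hz sonic records according to Equation (17); detrending and quality control are omitted for brevity, and the sign convention follows the standard definition.

```python
import numpy as np

KAPPA, G = 0.4, 9.81

def obukhov_length(u, v, w, T_s):
    """Obukhov length L_O from one 10 min block of 20 Hz sonic data, Eq. (17)."""
    up, vp, wp, Tp = (x - np.mean(x) for x in (u, v, w, T_s))
    u_star = (np.mean(up * wp) ** 2 + np.mean(vp * wp) ** 2) ** 0.25
    w_Ts = np.mean(wp * Tp)                     # kinematic heat flux <w'T_s'>
    return -np.mean(T_s) * u_star**3 / (KAPPA * G * w_Ts)

def is_neutral(z, L_O, lower=-0.05, upper=0.05):
    """Neutral stratification filter: -0.05 <= z / L_O < 0.05."""
    return lower <= z / L_O < upper
```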
The set of turbulence parameters $tp$ of the corresponding model is then obtained through a least-squares minimization of the summed error between the modeled spectra $F_i(k_1, tp)$ and the ensemble-averaged spectra from measurements $\hat{F}_i(k_1)$ in the longitudinal, lateral, and vertical directions ($i = 1, 2, 3$) [12,21,51]:
$tp = \underset{tp}{\arg\min} \sum_{i=1}^{3} \left\| \hat{F}_i(k_1) - F_i(k_1, tp) \right\|^2.$ (18)
For the Mann model, the fit results in the three parameters $L_M$, $\Gamma$, and $\alpha \epsilon^{2/3}$, whereas the KSEC model requires a variance and a length scale for each component; see Equation (7). Fits were performed for both models for wave numbers from 0.001 m⁻¹ to 1 m⁻¹. The IEC-recommended parameters and the fitting results are shown for both models in Table 4. $\alpha \epsilon^{2/3}$ and $\sigma_1$ are not shown as they are scaled correspondingly; see Section 2.3.2. The measured spectra $F_u$, $F_v$, $F_w$, the Mann IEC spectra, and the fitted spectra are shown in Figure 4, paired with an exemplary time series of the longitudinal velocity component $u$ for the IEC and fitted parameters of the Mann model with the same random seed.
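For the KSEC parameters, the least-squares fit of Equation (18) can be sketched as follows with scipy.optimize.least_squares; the measured, ensemble-averaged spectra are assumed to be available per component on a common wave number grid, and the initial values and bounds are illustrative, not the settings used here.

```python
import numpy as np
from scipy.optimize import least_squares

def kaimal_general(k1, sigma2, L):
    """Kaimal-type spectrum with free variance and length scale (cf. Eq. (7))."""
    return sigma2 * (L / np.pi) / (1.0 + 3.0 * k1 * L / np.pi) ** (5.0 / 3.0)

def fit_ksec(k1, F_meas):
    """Fit (sigma_i^2, L_i) per component to measured spectra F_meas[i], Eq. (18)."""
    params = []
    for F_i in F_meas:                                   # components i = u, v, w
        def residuals(p):
            return kaimal_general(k1, p[0], p[1]) - F_i
        fit = least_squares(residuals, x0=[1.0, 100.0],
                            bounds=([1e-6, 1.0], [np.inf, 5000.0]))
        params.append(fit.x)                             # (sigma_i^2, L_i)
    return np.array(params)
```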
With the two sets of spectral parameters from IEC and measurements (meas) and two different models, we have four members to be compared against each other, namely Mann IEC, Mann meas, KSEC IEC, and KSEC meas. In the following, we will use the IEC-compliant parameters with the Mann model as the benchmark. The vertical shear profile (see Equation (6)) is kept the same for all members, using the parameters from Table 3.

2.5.2. Test Case 2: Varying Shear Conditions

Information on the mean wind speed and direction profile over y and z is required for the generation of synthetic wind fields. The full coverage of the vertical profile, however, is rarely granted. Therefore, interpolation and extrapolation of velocity statistics from measured heights is necessary. The IEC 61400-13 guideline [2] defines the vertical wind shear below hub height as mandatory to be measured, whereas the measurements above are only recommended. In this case, however, abnormal shears above hub height might not be covered, such as low-level jets (LLJs) [41,42,61], although it was found that these phenomena can have a noticeable impact on the turbine loads [62].
Therefore, the second test case demonstrates the validation framework for the incorporation of a spatially highly resolved vertical wind speed profile from LiDAR measurements compared to extrapolated statistics from met mast-based measurements. To this end, we select an extreme case of an abnormal vertical profile and highlight the advantages of wind measurements along the full rotor. The development of the selected LLJ during the morning hours of 20 April 2021 is shown in Figure 5a.
The LiDAR wind speed measurements over the whole rotor for the 10 min period at 03:40 are shown in red in Figure 5b, and the cup anemometer measurements from the met mast (MM) as black crosses. The wind speeds at the corresponding heights do not show major deviations. However, the met mast measurements, reaching only up to hub height, give no indication of a developed LLJ. Therefore, based on the mast measurements, assumptions for the extrapolated wind speed profile have to be made: the profile can be linearly extrapolated using the two closest measurement locations (MM lin), extrapolated with a nearest-neighbor approach (MM NN), or obtained from a fit of Equation (6) to the data (MM fit). The resulting vertical profiles can be seen in Figure 5b. The height-dependent profile of the wind direction, which is often associated with an LLJ, is neglected here to focus on the impact of the wind speed profile only.
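The three met mast-based variants can be sketched as follows; the listed measurement heights and wind speeds are placeholders and not the values from Figure 5.

```python
import numpy as np
from scipy.optimize import curve_fit

# placeholder met mast measurements (heights in m, mean wind speeds in m/s)
z_mm = np.array([31.0, 57.0, 83.0, 110.0])
u_mm = np.array([6.9, 7.4, 7.8, 8.1])
z_grid = np.linspace(20.0, 210.0, 64)           # heights of the wind field grid

# MM lin: linear extrapolation from the two uppermost measurement points
slope = (u_mm[-1] - u_mm[-2]) / (z_mm[-1] - z_mm[-2])
u_lin = np.where(z_grid <= z_mm[-1],
                 np.interp(z_grid, z_mm, u_mm),
                 u_mm[-1] + slope * (z_grid - z_mm[-1]))

# MM NN: nearest-neighbor extrapolation (constant above the top sensor)
u_nn = np.interp(z_grid, z_mm, u_mm)            # np.interp holds the edge values

# MM fit: least-squares fit of the power law, Eq. (6), to the mast data
power_law = lambda z, u_ref, alpha: u_ref * (z / z_mm[-1]) ** alpha
(u_ref_fit, alpha_fit), _ = curve_fit(power_law, z_mm, u_mm, p0=[u_mm[-1], 0.2])
u_fit = power_law(z_grid, u_ref_fit, alpha_fit)
```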
Since we want to evaluate the impact of these different extrapolation methods on the shear profile through the validation framework separately, we keep the turbulent parameters and turbulent fields constant by using the same 50 turbulence boxes with the Mann IEC parameters from test case 1 for all four members, while we vary only the vertical profile. The spatially higher resolved profile from the LiDAR serves in this test case as the benchmark member.

3. Results

This section presents the results obtained for the uncertainty evaluation of the synthetic wind fields, followed by the application of the load validation framework for the two previously defined test cases.

3.1. Uncertainties of Generated Turbulent Wind Fields

The results of the applied uncertainty model for synthetic wind fields can be seen in Figure 6.
The left plot shows the variation in the mean wind speed uncertainty $u_{\langle u \rangle(z)}$ with the power law exponent uncertainty $u_{\alpha}$, while keeping the reference wind speed and the reference height with their associated uncertainties constant at (8.07 ± 0.10) m s⁻¹ and (114.7 ± 0.1) m, respectively. Whereas the uncertainties close to hub height do not vary, as $\partial \langle u \rangle(z) / \partial \alpha$ becomes zero (see Equation (14)), they clearly increase with $z / z_{\mathrm{ref}}$. This deviation is enlarged with higher uncertainty in the determined power law exponent $u_{\alpha}$. Higher uncertainties in the reference wind speed $u_{u_{\mathrm{ref}}}$ show similar behavior, although the variation with height increases with higher uncertainty in $u_{\mathrm{ref}}$. The contribution of the reference height uncertainty $u_{z_{\mathrm{ref}}}$ is markedly smaller.
The plot on the right of Figure 6 shows the variance of the longitudinal wind speed at each grid point, averaged over 10,000 simulations with varying seeds and a given target variance $\sigma_u^2$ of 1 m² s⁻². The total variations of the turbulent longitudinal component $\sigma_{\mathrm{turb}}^2(y, z)$ can then be obtained from the MC results through the simulation-averaged variance of the longitudinal velocity component at each $y$-$z$ grid point. The uncertainties in the fluctuations through scaling and seeds are then calculated by subtracting the target variance from $\sigma_{\mathrm{turb}}^2(y, z)$. It can be seen that, at the hub height position, the target variance is exactly retrieved. However, the variance increases with the distance $|r_{23}|$ from the reference scaling position. A maximum value of 0.116 m² s⁻² can be found in the top-right corner.

3.2. Test Case 1: Influence of Turbulence Parameters

As the first test case focuses on turbulence parameters and models, which are used for the whole turbine design or model validation process and have an impact on the variation in the resulting loads, we will focus on variations in the simulated SoIs, expressed in particular through the response metric of the standard deviation.
The blade flapwise root bending moment (rbm) EDFs for the response metric mean are displayed in Figure 7a, normalized with the median of the benchmark Mann IEC. The benchmark consistently indicates the highest mean flapwise bending moments among the four members. Nevertheless, the offset in the medians between the members is minor, reaching at most a 0.5% lower estimation. The overall spread of the SRQs due to the seeds is small, ranging within ±0.9% around the median for the benchmark. The KSEC wind models generate a similar distribution of SRQs, whereas the Mann meas model shows a slightly lower EDF slope, and the KSEC meas model presents the highest slope. Generally, the EDFs appear to be very linear except for the outer data points.
The same plot for the response metric standard deviation is shown in Figure 7b. Again, the spread due to the random seeds is similar for all members, but higher compared to the response metric mean. For the benchmark, the SRQ values vary by approximately ±21% around the median of the SRQs. Further, the SRQs from both IEC conformal wind fields align for a wide range of probabilities. The KSEC meas model results in a similar but deviating SRQ distribution of the blade’s flapwise root bending moment. To quantify this difference, the AM is calculated and indicated as the gray shaded area between the KSEC meas and the benchmark Mann IEC, obtaining a result of 0.015 times the median of the mean blade flapwise rbm and an NAM of 0.23. The Mann model with turbulence parameters from measurements yields even higher standard deviation values, increasing the NAM up to 1.44.
The normalized area metrics of all response metrics and all members are shown exemplarily for the SoI blade flapwise rbm in Figure 8. The differences in the std and mean of the blade’s flapwise rbm, which were visible in Figure 7, are now quantified and displayed. As observed before, the area metric is the largest for the std SRQ obtained from the member Mann meas. This value can then be interpreted as the ratio of the model’s discrepancy to the variation in the benchmark due to random seeds. The NAM value for the IEC conformal KSEC wind fields approaches zero for the same combination of the SoI and response metric, indicating a high level of agreement with the benchmark.
Similar observations can be made for the remaining metrics showing deviations from the benchmark. Furthermore, for the response metric mean, a larger NAM can be found, even though the offset of the distributions is minor in relation to the median of the benchmark. However, it must be said that the variations due to the random seeds are small for the benchmark, resulting in a low RAN and thus higher NAM values. This observation, together with the fact that the area metric does not indicate whether a model over- or underpredicts an SRQ, will be discussed in Section 4. For the stDEL, higher SRQs are found for the Mann model with measured parameters, whereas the KSEC models generate lower stDEL compared to the benchmark, giving an NAM of 0.75.
Analyzing the NAM for various SoI and the response metric std in Figure 8b, it becomes visible that again the largest difference between the models and the benchmark is found for the Mann meas model.
The largest NAM can be found for the aforementioned blade flapwise rbm and the tower bottom fore-aft bending moment. Furthermore, the KSEC meas model generates higher NAM values of up to 0.76 for these SoI, whereas the IEC KSEC model provides normalized area metrics lower than 0.68 for all sensors, indicating a high agreement with the benchmark. Note that both result figures in Figure 8 provide no value for the Mann IEC model, as this model defines the benchmark in this test case.

3.3. Test Case 2: Influence of Vertical Shear Profile Measurements

Opposite to the first test case, this test case does not represent standard turbulent conditions but is used to illustrate rare and extraordinary shear situations. As mentioned before, the LiDAR-based wind field is used as a benchmark.
The EDFs of the maximum blade flapwise rbm are illustrated in Figure 9a. The variation in the maxima caused by the seed variation is in the order of ±13% of the benchmark median for all members. However, the medians of the members differ considerably, with over-predictions of up to 20% of the reference median. The four members differ significantly and do not show any crossings of their EDFs. The LiDAR benchmark with real measurements over the whole rotor shows the lowest maximum blade flapwise rbm, followed by the met mast nearest-neighbor extrapolation. The linear extrapolation of the met mast results in slightly higher loads, whereas the fitted wind profile from the met mast measurements indicates the highest loading. This is in line with the mean wind speeds that are modeled in the upper half of the rotor; see Figure 5.
The means of the hub nodding moment, shown in Figure 9b, indicate an even larger mismatch. Here, the offsets of the EDFs from the benchmark are remarkably larger for all members, without any overlap. In this case, for the MM fit model, an NAM of 187.9 is obtained (gray area in Figure 9b).
The corresponding NAM for the response metric max are displayed in Figure 10a. Compared to test case 1, significantly higher values are obtained. The largest differences from the benchmark can be found for the hub nodding moment, the tower bottom fore-aft moment, and the flapwise bending moments. The highest maximum values for all SoI, except Tb_Mx, are obtained for the wind fields with the fitted power law. This wind field model is directly compared to the benchmark for all SoI and response metrics in Figure 10b. For all sensors, the largest NAM are found for the response metric mean, reaching an NAM of up to 257.6 for the hub nodding moment, which becomes negative only for the benchmark due to the low height of the LLJ. The calculated NAM for the stDEL are relatively low. Note that the stDEL is not given for the SoI GenPower and RotorSpeed due to the missing direct connection to a structural component. The NAM values are influenced by the RAN, which is small relative to the AM. If only the AM is evaluated, the largest differences can be found for the response metrics max and min of the tower base fore-aft bending moment.

4. Discussion

In this section, we discuss the findings from the introduced uncertainty model, followed by the results of the test cases. Subsequently, we review the strengths and limitations of the proposed load model validation framework, which represents the main contribution of this work.

4.1. On the Uncertainties

The accurate modeling of the mean wind field is connected to certain uncertainties associated with uncertainties in the underlying wind measurements. Here, we developed an uncertainty model for synthetic wind fields that incorporates uncertainties in the mean wind field and uncertainties in the turbulent boxes due to scaling and random seeds. For the mean wind field uncertainty, we used the power law to describe the vertical wind shear. The uncertainties in the mean wind field here were shown to be highly dependent on the uncertainty in the power law exponent u α . In combination with measurements, this uncertainty can be reduced with an increased resolution of measured heights over the rotor; fewer measurements will cause higher uncertainty in the shear profile. Further, the distribution of measured heights should include the whole rotor size, not only heights below hub height, which was shown by the various shear profiles when information on the upper rotor half is missing; see Figure 5. In principle, any other analytical wind model describing the mean wind field can be used in the analytical uncertainty propagation, such as a logarithmic wind profile or abnormal shear profiles, e.g., LLJs.
Further, we evaluated the variation introduced in the longitudinal velocity component by the scaling of turbulent wind fields and the generation of random seeds. Due to the chosen scaling methodology, the target variance is retrieved exactly at the reference grid point, reducing the total turbulent variation there to the sum of the target variance and the uncertainty in this parameter ($\sigma_u^2 + u_{\sigma_u^2}$). Adjacent grid points retain a lower added variation in the longitudinal component, which can be related to the coherence of the velocity components and the consequently correlated statistics. With increasing distance from the reference point, the variation increases remarkably. Even though numerous simulations have been performed, the results do not fully converge, as can be seen from the spatial variation in the statistics in Figure 6b; although this variation is low, it indicates a limitation of the methodology. Further, the coherence and turbulent structures depend on the turbulence parameters, and the results will vary with those parameters. For this study, the turbulence parameters have been assumed to be fixed without any related uncertainties to simplify the model and reduce the computational demand. Moreover, the uncertainty in the target variance, $u_{\sigma_u^2}$, was assumed to be zero. The scaling methodology has an impact on the results as well. Here, we scaled the variance to a target value based on the variance at hub height, as described in [53], whereas, e.g., Dimitrov et al. [20] performed the scaling based on the variance of the whole field. The propagation of these uncertainties into load simulations was not performed here and will be part of a follow-up investigation. Furthermore, exact measures of the uncertainties in the mean wind field and the turbulence parameters will then be determined, as here we rely on reasonable assumptions.
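The scaling step discussed above can be sketched as follows: a single global factor rescales the longitudinal fluctuations so that the variance at the chosen reference grid point matches the target exactly, while all other grid points are scaled by the same factor. The random box in the usage example is only a stand-in for an actual Mann or KSEC realization, and the grid dimensions are arbitrary.

```python
import numpy as np

def scale_to_target_variance(u_fluct, iy_hub, iz_hub, target_var):
    """Scale zero-mean longitudinal fluctuations u_fluct (shape: nt x ny x nz)
    with a single global factor so that the variance at the reference grid
    point (iy_hub, iz_hub) equals target_var exactly."""
    var_hub = np.var(u_fluct[:, iy_hub, iz_hub])
    factor = np.sqrt(target_var / var_hub)
    return factor * u_fluct  # all other grid points are scaled by the same factor

# Usage with a random stand-in box (an actual Mann/KSEC realization would be used instead):
rng = np.random.default_rng(42)
u_box = rng.normal(0.0, 1.3, size=(4096, 32, 32))
u_scaled = scale_to_target_variance(u_box, iy_hub=16, iz_hub=16, target_var=1.0)
assert np.isclose(np.var(u_scaled[:, 16, 16]), 1.0)
```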

4.2. On the Test Case Results

Two different wind field models, each with two sets of turbulence parameters, have been compared with each other in test case 1 through the application of the proposed load validation framework. Firstly, the two sets of turbulence parameters have been determined from the IEC guidelines and from high-frequency turbulence measurements. The Mann turbulence parameters ($\Gamma$, $L_M$) derived from the measurements deviate notably from the parameters proposed by the guidelines, indicating higher length scales, $L_M$, and a lower anisotropy parameter, $\Gamma$, at the relevant heights. These findings align with the results in [20,21,51], which consistently found higher length scales and a lower anisotropy parameter for stable atmospheric conditions at the investigated heights of more than 100 m. For the KSEC model parameters determined from the same measurements, however, the difference from the IEC values is not as large. Thus, the underlying spectra and coherence are not changed significantly. This might be caused by the fact that the spectra are fitted for each velocity component individually, obtaining a separate length scale and variance per component. Further, due to the logarithmic decay of the turbulent energy, the fits to Equation (18) are mainly determined by the spectral power at low wave numbers, resulting in a sub-optimal fit in the inertial subrange. This could be mitigated by fitting the premultiplied spectra $k \cdot F(k)$ instead of $F(k)$.
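The suggested change of the fitting objective can be illustrated with the sketch below; the Kaimal-type spectral form is only a stand-in for Equation (18), which is not reproduced in this section, and the initial parameter guesses are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_point_spectrum(k, sigma2, L):
    """Kaimal-type one-point spectrum, used here only as a stand-in for Eq. (18)."""
    return sigma2 * 4.0 * L / (1.0 + 6.0 * k * L) ** (5.0 / 3.0)

def fit_plain(k, f_meas):
    """Fit F(k) directly: the objective is dominated by the energy-containing range."""
    popt, _ = curve_fit(one_point_spectrum, k, f_meas, p0=(1.0, 100.0))
    return popt  # (sigma2, L)

def fit_premultiplied(k, f_meas):
    """Fit k*F(k): the inertial subrange carries more weight in the objective."""
    premult = lambda k_, sigma2, L: k_ * one_point_spectrum(k_, sigma2, L)
    popt, _ = curve_fit(premult, k, k * f_meas, p0=(1.0, 100.0))
    return popt  # (sigma2, L)
```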
The impact of the varied turbulence parameters on the simulated loads has then been evaluated with the validation framework. The largest differences from the benchmark, the IEC-compliant Mann model, were found for the Mann model with measured turbulence parameters. The findings of this study partly deviate from the conclusions of [20], who performed a comprehensive study to evaluate the impact of the Mann turbulence parameters on simulated loads. They found reduced DEL and extreme loads for most sensors (including blade flapwise bending moments) with increased length scale and increased anisotropy parameter, where the change in length scale has the larger impact, and thus lower loads are obtained for measured turbulence parameters. However, our study differs in some respects from their evaluations. Firstly, Dimitrov et al. [20] used a lower length scale of $L_M = 29.4$ m, according to the IEC guideline 61400-1:2005, for the benchmark, and, instead of evaluating the short-term DEL, they extrapolated the DEL from each wind speed bin to the whole lifetime. Secondly, IEC class A was assumed by Dimitrov et al. [20]. Here, however, we only consider one wind speed below rated and no extrapolation, a lower turbulence intensity, and a different turbine model with different controller settings. The study of Nybø et al. [52], on the other hand, aligns with the findings in this contribution, showing an increased DEL for similar length scales (73.6 m in their study).
For the second test case, the observed NAM values are considerably larger than in test case 1. This is caused by the major differences in the mean wind field, which affect all considered SoI. Again, the benchmark's variation in the SRQs due to the seed variation is much smaller than the mismatch of the wind field models. Larger nodding moments and blade flapwise rbm are found, caused by the higher wind speeds above hub height as well as the stronger gradient. Due to the higher velocity and energy content in the inflow over the whole rotor, the extracted power also increases significantly.
These results also highlight the importance of an accurate mean wind field apart from the turbulence modeling. As shown above, an IEC-compliant met mast with measurements up to hub height gave no indication of this event, and the situation would have become an undetected outlier in a potential load model validation. Therefore, wind speed measurements over the whole rotor are necessary to detect such events. However, it must be noted that these situations are quite rare at the evaluated site [41].

4.3. On the Load Model Validation Framework and Area Metric

The introduced load validation framework provides a sound basis for the evaluation of synthetic wind field models and for load model validation. The strengths of the proposed metric are manifold and are not limited to the application demonstrated herein. Having simulated all combinations of interest, the framework and the metrics give a comprehensive yet quick overview of mismatches and agreements of models. Further, this is performed based on a quantifiable metric, which can be compared between sensors of different physical units through the introduced normalization. Differences between models quickly become visible, and large amounts of simulation results (here 50 seeds × 4 members) can be concisely condensed into a few metrics. Furthermore, these dimensions can easily be extended, providing high flexibility of the framework. The normalized area metric makes it possible to classify differences between a model and the benchmark in relation to the spread within the benchmark sample: NAM < 1 means that the variation due to the different model is smaller than the variation due to the seeds. This is an easy-to-understand and easy-to-interpret measure.
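A minimal sketch of the metric computation is given below. The area metric is the area between the two EDFs; the normalization divides it by the seed-induced spread of the benchmark sample, here taken as the range (RAN) as suggested by the text, so deviations from the exact definition in the methods section are possible.

```python
import numpy as np

def area_metric(model_srq, benchmark_srq):
    """Area between the two empirical distribution functions (EDFs),
    i.e. the integral of |F_model(x) - F_benchmark(x)| over the pooled support."""
    model_srq = np.sort(np.asarray(model_srq, dtype=float))
    benchmark_srq = np.sort(np.asarray(benchmark_srq, dtype=float))
    x = np.sort(np.concatenate([model_srq, benchmark_srq]))
    f_model = np.searchsorted(model_srq, x, side="right") / model_srq.size
    f_bench = np.searchsorted(benchmark_srq, x, side="right") / benchmark_srq.size
    return float(np.sum(np.abs(f_model - f_bench)[:-1] * np.diff(x)))

def normalized_area_metric(model_srq, benchmark_srq):
    """NAM: AM divided by the seed-induced spread of the benchmark sample
    (here: its range, RAN), so that NAM < 1 indicates a model-to-benchmark
    mismatch smaller than the variation caused by the random seeds."""
    ran = float(np.max(benchmark_srq) - np.min(benchmark_srq))
    return area_metric(model_srq, benchmark_srq) / ran

# Usage with two samples of 50 seed realizations each (synthetic numbers):
rng = np.random.default_rng(1)
benchmark = rng.normal(10.0, 0.2, size=50)   # SRQ sample of the benchmark member
model = rng.normal(10.5, 0.2, size=50)       # SRQ sample of a compared member
print(normalized_area_metric(model, benchmark))
```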
Additionally, this allows for a comprehensive evaluation of simulation uncertainties through the analysis of the impact of the random seeds. The variation in the SRQ within the EDF, however, only represents the variation due to the random seed variation. If the measurement and, consequently, the wind field uncertainties are included, the resulting increase in spread also needs to be accounted for. The proposed framework provides access to these variations by considering the whole distribution of the SRQ for different SoIs and varying wind fields. Evaluating a NAM table with different SoIs and different response metrics allows for comprehensive insight into the model's accuracy. On the one hand, different SoIs take into account different characteristics of the model's behavior. For example, the blade root bending moment describes the behavior of a blade, whereas the hub nodding moment considers the entire rotor. On the other hand, different response metrics take into account different characteristics of the time series. The mean value indicates an offset between time series, whereas the std or stDEL reflects the fluctuation within a time series. Finally, the normalized area metric compares the entire SRQ sample, not just partial aspects such as the mean or the standard deviation of the distributions.
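As an illustration of how a single SoI time series is condensed into the response metrics of Table 2, the sketch below uses the open-source rainflow package for cycle counting; the Wöhler exponent m and the reference cycle number n_ref are placeholder values, not those used in this study.

```python
import numpy as np
import rainflow  # open-source rainflow-counting package

def response_metrics(ts, m=10.0, n_ref=2.0e6):
    """Condense one SoI time series into the response metrics of Table 2.
    The short-term DEL is computed from rainflow-counted load ranges with an
    assumed Woehler exponent m and reference cycle number n_ref."""
    ts = np.asarray(ts, dtype=float)
    cycles = rainflow.count_cycles(ts)  # list of (load range, cycle count)
    stdel = (sum(count * load_range ** m for load_range, count in cycles) / n_ref) ** (1.0 / m)
    return {
        "mean": float(np.mean(ts)),
        "std": float(np.std(ts)),
        "min": float(np.min(ts)),
        "max": float(np.max(ts)),
        "stDEL": stdel,
    }
```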
However, the framework and the method of the area metric calculation imply certain limitations. The first limitation lies in the definition of the area metric as a non-negative valued metric with physical units, as explained before. The area metric does not indicate whether a model consistently over- or underestimates the SRQ, whether its EDF crosses the benchmark EDF, or whether it represents an entirely different distribution of the simulated SRQ. In the first test case, most response metrics for the blade flapwise rbm consistently showed higher SRQ values compared to the benchmark, except for the response metric mean, for which all models showed lower values. This information, however, is not reflected in the NAM values presented in Figure 8, which might make an additional investigation of the EDF plot necessary. An adaptation of the AM calculation is considered for future contributions. Furthermore, the area metric compares two EDFs that are generated from the samples. The information on the order of occurrence of the SRQ values within a sample is lost in the process. Thus, the area metric is not explicitly intended for comparing paired samples. However, when using constrained turbulence and identical seed numbers, the samples of different members can be paired. See also the time series in Figure 4b.
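One possible, purely hypothetical adaptation to recover the sign information would be to evaluate the signed area between the EDFs in addition to the AM; it equals the difference of the sample means and is positive when the model tends to larger SRQ values, but crossings partly cancel, so it can only complement, not replace, the AM.

```python
import numpy as np

def signed_area(model_srq, benchmark_srq):
    """Signed counterpart to the area metric: the integral of
    (F_benchmark(x) - F_model(x)) dx, which equals mean(model) - mean(benchmark).
    Positive values indicate a tendency towards larger SRQ values than the
    benchmark; crossing EDFs partly cancel, so this only complements the AM."""
    model_srq = np.sort(np.asarray(model_srq, dtype=float))
    benchmark_srq = np.sort(np.asarray(benchmark_srq, dtype=float))
    x = np.sort(np.concatenate([model_srq, benchmark_srq]))
    f_model = np.searchsorted(model_srq, x, side="right") / model_srq.size
    f_bench = np.searchsorted(benchmark_srq, x, side="right") / benchmark_srq.size
    return float(np.sum((f_bench - f_model)[:-1] * np.diff(x)))
```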
Another limitation lies in the proposed normalization of the area metric. Whereas the normalization directly quantifies the mismatch of a model to the benchmark relative to the variation in the benchmark's SRQ values due to random seeds, a divisor approaching zero has a significant impact on the normalized value. If only n = 1 realization of the benchmark is available, or if a deterministic model without any variation in the SRQ is given, the normalized value is not defined due to the division by zero. In practice, there will always be some variation in the benchmark SRQs; however, it can be small, as shown in the first test case for the mean of the blade flapwise rbm, where the NAM values were relatively high compared to the remaining SoI because of only minor variation in the benchmark's response metric values. Consequently, the NAM should not be misinterpreted as an indication of the absolute mismatch between a model and a benchmark but should rather be interpreted relative to the benchmark's initial variation due to random seeds. Further normalization approaches are necessary and will be the subject of future studies. Moreover, not every combination of response metric and SoI is reasonable and of interest, e.g., the stDEL of the generator power. The application shows that the correct use and interpretation of the metric results lies with the user of the framework. Finally, the framework only includes a validation assessment, i.e., an assessment of model accuracy by comparing model predictions with experimental data. Thus, the predictive capability of the model beyond the validation domain is not yet covered.
In this contribution, the proposed metric and framework have been tested and demonstrated; however, no real load measurements have been included, and a benchmark has instead been defined to compare against. Nevertheless, the validation methodology foresees that the benchmark representing the truth is derived from measurements. Therefore, a comprehensive validation of the load model of the considered research turbine, utilizing actual measurements under different atmospheric conditions, is the aim of future work. Further, the framework will be extended with additional evaluation metrics, incorporating not only metrics on the simulation and the measurement individually but also metrics between simulation and measurement, such as the mean absolute error. In addition, the framework will be extended with the modified area validation metric [63]. Lastly, the uncertainties in the relevant wind field parameters will be evaluated through the framework to quantify the resulting uncertainties in the aeroelastic loads.

5. Conclusions

In this contribution, a load model validation framework was introduced, providing access to a novel metric that quantifies model agreement with a benchmark and can be used to evaluate varying synthetic turbulent wind field models. For demonstration and testing purposes, the framework was applied to two test cases with varied synthetic wind fields, focusing on turbulence parameters for different models and on abnormal shear events, respectively. Wind field parameters have been derived from the IEC guideline and from comprehensive wind measurements at the Testfeld BHV. A large set of simulations has been performed with the load model of the AD8-180 research wind turbine. The application highlighted the advantages of the proposed normalized area metric, which quantitatively indicates the mismatch of a model to a benchmark in relation to the variation in the benchmark's SRQ caused by the random seed variation of the synthetic wind field. Deriving wind field parameters from measurements that are subject to uncertainties requires evaluating the effect of those uncertainties on load simulations. Therefore, an uncertainty model describing the variations of the wind speed components in synthetic wind fields due to measurement uncertainties was introduced. The main findings of this study can be summarized as:
  • A framework and a validation metric have been introduced, providing access to a quantifiable evaluation of a load model's agreement with a benchmark and, consequently, a metric for the confidence in the correctness of load models;
  • The framework was applied to two test cases, demonstrating its flexibility and summarizing the capabilities of the introduced metric, reducing information from numerous simulations to single quantities;
  • For the Testfeld BHV site, the turbulence parameters extracted from high-frequency three-dimensional measurements deviate from the IEC-compliant parameters. Adjusting the parameters in load simulations has a noticeable impact on the simulated aeroelastic wind turbine loads;
  • Uncertainties due to spatially lower-resolved wind measurements can, in extreme cases such as LLJs, result in extreme deviations in the considered load sensors and should thus be reduced by measurements with a higher spatial resolution;
  • A model for quantifying the uncertainty in the longitudinal wind speed component of synthetic wind fields was demonstrated, showing a significant dependency of these variations on the scaling approach and the input uncertainties.
The proposed framework and metric provide a sound basis for wind turbine load model validations. Both strengths and limitations are outlined, guiding the intended and correct use of the framework, which can be applied not only to load model validation but also to the evaluation of load simulation inputs such as different synthetic wind fields.

Author Contributions

Conceptualization: all; methodology: M.L.H. designed the validation framework, P.J.M. designed uncertainty quantification; validation: P.J.M. supported by M.L.H.; formal analysis: M.L.H. and P.J.M. conducted load simulation analysis, P.J.M. performed wind data analysis; investigation: M.L.H. set up the simulation framework. P.J.M. performed the final simulations; writing—original draft preparation, P.J.M. and M.L.H.; writing—review and editing, all; visualization, mainly P.J.M.; supervision, J.G.; project administration, J.G.; funding acquisition, J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Federal Ministry for Economic Affairs and Climate Action on the basis of the decision by the German Bundestag under grant numbers 03EE2001 (HighRE), 0324148 (Testfeld BHV), and 03EE2031C (EMUwind).

Data Availability Statement

The 10 min averaged wind speeds from the vertical profiler and the cup anemometers shown in Figure 5, together with the ensemble-averaged spectra shown in Figure 4, were made publicly available and can be accessed via https://www.doi.org/10.5281/zenodo.10409777 (accessed on 1 February 2024). The generated wind fields, the Python code used to generate them, and the code used to generate the figures can be requested from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. IEC 61400-1:2019; Wind Energy Generation Systems—Part 1: Design Requirements. IEC: Geneva, Switzerland, 2019.
  2. IEC 61400-13:2015; Wind Energy Generation Systems—Part 13: Measurement of Mechanical Loads. IEC: Geneva, Switzerland, 2015.
  3. Hills, R.; Maniaci, D.; Naughton, J. V&V Framework; Sandia National Lab.: Albuquerque, NM, USA, 2015. [CrossRef]
  4. Verdonck, H.; Hach, O.; Polman, J.D.; Braun, O.; Balzani, C.; Müller, S.; Rieke, J. An open-source framework for the uncertainty quantification of aeroelastic wind turbine simulation tools. J. Phys. Conf. Ser. 2022, 2265, 042039. [Google Scholar] [CrossRef]
  5. Popko, W.; Vorpahl, F.; Jonkman, J.; Robertson, A. OC3 and OC4 projects—Verification benchmark exercises of the state-of-the-art coupled simulation tools for offshore wind turbines. In Proceedings of the 7th European Seminar Offshore Wind and Other Marine Renewable Energies in Mediterranean and European Seas, Rome, Italy, 5–7 September 2012; pp. 499–504. [Google Scholar]
  6. Popko, W.; Vorpahl, F.; Zuga, A.; Kohlmeier, M.; Jonkman, J.; Robertson, A.; Larsen, T.J.; Yde, A.; Sætertrø, K.; Okstad, K.M.; et al. Offshore Code Comparison Collaboration Continuation (OC4), Phase I—Results of coupled simulations of an offshore wind turbine with jacket support structure. J. Ocean Wind Energy 2014, 1, 1–11. [Google Scholar]
  7. Popko, W.; Huhn, M.L.; Robertson, A.; Jonkman, J.; Wendt, F.; Müller, K.; Kretschmer, M.; Vorpahl, F.; Hagen, T.R.; Galinos, C.; et al. Verification of a Numerical Model of the Offshore Wind Turbine From the Alpha Ventus Wind Farm within OC5 Phase III. In Proceedings of the ASME 37th International Conference on Ocean, Offshore and Arctic Engineering—2018, Madrid, Spain, 17–22 June 2018; The American Society of Mechanical Engineers: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  8. Robertson, A.N.; Wendt, F.; Jonkman, J.M.; Popko, W.; Dagher, H.; Gueydon, S.; Qvist, J.; Vittori, F.; Azcona, J.; Uzunoglu, E.; et al. OC5 Project Phase II: Validation of Global Loads of the DeepCwind Floating Semisubmersible Wind Turbine. Energy Procedia 2017, 137, 38–57. [Google Scholar] [CrossRef]
  9. Söker, H.; Damaschke, M.; Illig, C.; Kröning, J.; Cosack, N. A Guide to Design Load Validation. In Proceedings of the 8th Deutsche Windenerige-Konferenz (DEWEK), Wilhelmshaven, Germany, 22–23 November 2006. [Google Scholar]
  10. Zierath, J.; Rachholz, R.; Woernle, C.; Müller, A. Load Calculation on Wind Turbines: Validation of Flex5, Alaska/Wind, MSC.Adams and SIMPACK by means of Field Tests. In Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference—2014, Buffalo, NY, USA, 17–20 August 2014; ASME: New York, NY, USA, 2014. [Google Scholar] [CrossRef]
  11. Zierath, J.; Rachholz, R.; Woernle, C. Field test validation of Flex5, MSC.Adams, alaska/Wind and SIMPACK for load calculations on wind turbines. Wind Energy 2016, 19, 1201–1222. [Google Scholar] [CrossRef]
  12. Dimitrov, N.; Natarajan, A. Application of simulated lidar scanning patterns to constrained Gaussian turbulence fields for load validation. Wind Energy 2017, 20, 79–95. [Google Scholar] [CrossRef]
  13. Oberkampf, W.L.; Barone, M.F. Measures of agreement between computation and experiment: Validation metrics. J. Comput. Phys. 2006, 217, 5–36. [Google Scholar] [CrossRef]
  14. Oberkampf, W.L.; Roy, C.J. Verification and Validation in Scientific Computing; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar] [CrossRef]
  15. Ferson, S.; Oberkampf, W.L.; Ginzburg, L. Model validation and predictive capability for the thermal challenge problem. Comput. Methods Appl. Mech. Eng. 2008, 197, 2408–2430. [Google Scholar] [CrossRef]
  16. Hills, R.G. Model Validation: Model Parameter and Measurement Uncertainty. J. Heat Transf. 2006, 128, 339–351. [Google Scholar] [CrossRef]
  17. Sargent, R.G. An interval statistical procedure for use in validation of simulation models. J. Simul. 2014, 9, 232–237. [Google Scholar] [CrossRef]
  18. Zhang, R.; Mahadevan, S. Bayesian methodology for reliability model acceptance. Reliab. Eng. Syst. Saf. 2003, 80, 95–103. [Google Scholar] [CrossRef]
  19. Chen, W.; Xiong, Y.; Tsui, K.; Wang, S. A Design-Driven Validation Approach Using Bayesian Prediction Models. J. Mech. Des. 2008, 130, 021101. [Google Scholar] [CrossRef]
  20. Dimitrov, N.; Natarajan, A.; Mann, J. Effects of normal and extreme turbulence spectral parameters on wind turbine loads. Renew. Energy 2016, 101, 1180–1193. [Google Scholar] [CrossRef]
  21. Pena, A. Østerild: A natural laboratory for atmospheric turbulence. J. Renew. Sustain. Energy 2019, 11, 063302. [Google Scholar] [CrossRef]
  22. Chougule, A.; Mann, J.; Kelly, M.; Larsen, G.C. Modeling Atmospheric Turbulence via Rapid Distortion Theory: Spectral Tensor of Velocity and Buoyancy. J. Atmos. Sci. 2017, 74, 949–974. [Google Scholar] [CrossRef]
  23. Chougule, A.; Mann, J.; Kelly, M.; Larsen, G.C. Simplification and Validation of a Spectral-Tensor Model for Turbulence Including Atmospheric Stability. Bound.-Layer Meteorol. 2018, 167, 371–397. [Google Scholar] [CrossRef]
  24. Yassin, K.; Helms, A.; Moreno, D.; Kassem, H.; Höning, L.; Lukassen, L.J. Applying a random time mapping to Mann-modeled turbulence for the generation of intermittent wind fields. Wind. Energy Sci. 2023, 8, 1133–1152. [Google Scholar] [CrossRef]
  25. Friedrich, J.; Moreno, D.; Sinhuber, M.; Waechter, M.; Peinke, J. Superstatistical wind fields from point-wise atmospheric turbulence measurements. arXiv 2022, arXiv:2203.16948. [Google Scholar] [CrossRef]
  26. Rinker, J.M. PyConTurb: An open-source constrained turbulence generator. IOP Conf. Ser. J. Phys. Conf. Ser. 2018, 2018, 062032. [Google Scholar] [CrossRef]
  27. Rinker, J.M. Uncertainty in loads for different constraint patterns in constrained-turbulence generation. J. Phys. Conf. Ser. 2020, 1618, 052053. [Google Scholar] [CrossRef]
  28. Rinker, J.M. Impact of rotor size on aeroelastic uncertainty with lidar-constrained turbulence. J. Phys. Conf. Ser. 2022, 2265, 032011. [Google Scholar] [CrossRef]
  29. Dimitrov, N.; Borraccino, A.; Pena, A.; Natarajan, A.; Mann, J. Wind turbine load validation using lidar-based wind retrievals. Wind Energy 2019, 22, 1512–1533. [Google Scholar] [CrossRef]
  30. Pettas, V.; Costa García, F.; Kretschmer, M.; Rinker, J.M.; Clifton, A.; Cheng, P.W. A numerical framework for constraining synthetic wind fields with lidar measurements for improved load simulations. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2020. [Google Scholar] [CrossRef]
  31. ASME V&V 10; Standard for Verification and Validation in Computational Solid Mechanics: An International Standard. American Society of Mechanical Engineers: New York, NY, USA, 2019.
  32. Fricke, J.; Wiens, M.; Requate, N.; Leimeister, M. Python Framework for Wind Turbines Enabling Test Automation of MoWiT. In Proceedings of the 14th Modelica Conference 2021, Linkoping, Sweden, 20–24 September 2021; pp. 403–409. [Google Scholar] [CrossRef]
  33. Robertson, A.; Bachynski, E.E.; Gueydon, S.; Wendt, F.; Schünemann, P. Total experimental uncertainty in hydrodynamic testing of a semisubmersible wind turbine, considering numerical propagation of systematic uncertainty. Ocean Eng. 2020, 195, 106605. [Google Scholar] [CrossRef]
  34. Suo, B.; Qi, Y.; Sun, K.; Xu, J. A Novel Model Validation Method Based on Area Metric Disagreement between Accelerated Storage Distributions and Natural Storage Data. Mathematics 2023, 11, 2511. [Google Scholar] [CrossRef]
  35. Huhn, M.L.; Gómez-Mejía, A.F. Aeroelastic model validation with 8 MW field measurements: Influence of constrained turbulence with focus on power performance. J. Phys. Conf. Ser. 2022, 2265, 032058. [Google Scholar] [CrossRef]
  36. Wegner, A.; Huhn, M.L.; Mechler, S.; Thomas, P. Identification of torsional frequencies of a large rotor blade based on measurement and simulation data. J. Phys. Conf. Ser. 2022, 2265, 032021. [Google Scholar] [CrossRef]
  37. Leimeister, M.; Kolios, A.; Collu, M. Development and Verification of an Aero-Hydro-Servo-Elastic Coupled Model of Dynamics for FOWT, Based on the MoWiT Library. Energies 2020, 13, 1974. [Google Scholar] [CrossRef]
  38. Feja, P.; Huhn, M. Real Time Simulation of Wind Turbines for HiL Testing with MoWiT. In Proceedings of the Wind Energy Science Conference 2019 (WESC 2019), Cork, Ireland, 17–20 June 2019. [Google Scholar] [CrossRef]
  39. Neshati, M.; Zhang, H.; Thomas, P.; Heller, M.; Zuga, A.; Wenske, J. Evaluation of a Hardware-in-the-loop Test Setup Using Mechanical Measurements with a DFIG Wind Turbine Nacelle. J. Phys. Conf. Ser. 2022, 2265, 022105. [Google Scholar] [CrossRef]
  40. Taylor, G.I. The spectrum of turbulence. Proc. R. Soc. Lond. Ser. Math. Phys. Sci. 1938, 164, 476–490. [Google Scholar] [CrossRef]
  41. Rubio, H.; Kühn, M.; Gottschall, J. Evaluation of low-level jets in the southern Baltic Sea: A comparison between ship-based lidar observational data and numerical models. Wind Energy Sci. 2022, 7, 2433–2455. [Google Scholar] [CrossRef]
  42. Hallgren, C.; Aird, J.A.; Ivanell, S.; Körnich, H.; Barthelmie, R.J.; Pryor, S.C.; Sahlée, E. Brief communication: On the definition of the low-level jet. Wind Energy Sci. 2023, 8, 1651–1658. [Google Scholar] [CrossRef]
  43. Kaimal, J.C.; Wyngaard, J.C.; Izumi, Y.; Coté, O.R. Spectral characteristics of surface-layer turbulence. Q. J. R. Met. Soc. 1972, 98, 563–589. [Google Scholar] [CrossRef]
  44. Veers, P.S. Three-Dimensional Wind Simulation; Sandia National Labs.: Albuquerque, NM, USA, 1988.
  45. Mann, J. The spatial structure of neutral atmospheric surface-layer turbulence. J. Fluid Mech. 1994, 273, 141–168. [Google Scholar] [CrossRef]
  46. Mann, J. Wind field simulation. Probab. Eng. Mech. 1998, 13, 269–282. [Google Scholar] [CrossRef]
  47. Jonkman, B.J. TurbSim User’s Guide: V2.00.00: Draft Version; National Renewable Energy Laboratory: Golden, CO, USA, 2016.
  48. Mann, J. Standalone Mann-Turbulence Generator V2.0. 2018. Available online: https://www.hawc2.dk/install/standalone-mann-generator (accessed on 1 February 2024).
  49. Nybø, A.; Nielsen, F.G.; Reuder, J.; Churchfield, M.J.; Godvik, M. Evaluation of different wind fields for the investigation of the dynamic response of offshore wind turbines. Wind Energy 2020, 23, 1810–1830. [Google Scholar] [CrossRef]
  50. Doubrawa, P.; Churchfield, M.J.; Godvik, M.; Sirnivas, S. Load response of a floating wind turbine to turbulent atmospheric flow. Appl. Energy 2019, 242, 1588–1599. [Google Scholar] [CrossRef]
  51. Chougule, A.; Mann, J.; Segalini, A.; Dellwik, E. Spectral tensor parameters for wind turbine load modeling from forested and agricultural landscapes. Wind Energy 2015, 18, 469–481. [Google Scholar] [CrossRef]
  52. Nybø, A.; Nielsen, F.G.; Godvik, M. Analysis of turbulence models fitted to site, and their impact on the response of a bottom-fixed wind turbine. J. Phys. Conf. Ser. 2021, 2018, 012028. [Google Scholar] [CrossRef]
  53. Larsen, T.J.; Hansen, A.M. How 2 HAWC2: The User’s Manual: Risø-R-1597; Risø National Laboratory: Roskilde, Denmark, 2021.
  54. Liew, J.; Larsen, G.C. How does the quantity, resolution, and scaling of turbulence boxes affect aeroelastic simulation convergence? J. Phys. Conf. Ser. 2022, 2265, 032049. [Google Scholar] [CrossRef]
  55. JCGM. Evaluation of Measurement Data—Guide to the Expression of Uncertainty in Measurements (GUM): JCGM 100:2008: GUM 1995 with Minor Corrections. Available online: https://www.bipm.org/documents/20126/2071204/JCGM_100_2008_E.pdf/cb0ef43f-baa5-11cf-3f85-4dcd86f77bd6 (accessed on 1 February 2024).
  56. JCGM. Evaluation of Measurement Data—Supplement 1 to the Guide of the Expression of Uncertainty in Measurement—Propagation of Distributions Using a Monte Carlo Method: JCGM 101:2008. Available online: https://www.bipm.org/documents/20126/2071204/JCGM_101_2008_E.pdf (accessed on 1 February 2024).
  57. Liew, J. jaimeliew1/Mann.rs: Publish Mann.rs v1.0.0. Zenodo 2022. [Google Scholar] [CrossRef]
  58. Meyer, P.J.; Gottschall, J. Evaluation of the “fan scan” based on three combined nacelle lidars for advanced wind field characterisation. J. Phys. Conf. Ser. 2022, 2265, 022107. [Google Scholar] [CrossRef]
  59. Hung, L.Y.; Santos, P.; Gottschall, J. A comprehensive procedure to process scanning lidar data for engineering wake model validation. J. Phys. Conf. Ser. 2022, 2265, 022091. [Google Scholar] [CrossRef]
  60. Giyanani, A.; Sjöholm, M.; Rolighed Thorsen, G.; Schuhmacher, J.; Gottschall, J. Wind speed reconstruction from three synchronized short-range WindScanner lidars in a large wind turbine inflow field campaign and the associated uncertainties. J. Phys. Conf. Ser. 2022, 2265, 022032. [Google Scholar] [CrossRef]
  61. Gutierrez, W.; Araya, G.; Kiliyanpilakkil, P.; Ruiz-Columbie, A.; Tutkun, M.; Castillo, L. Structural impact assessment of low level jets over wind turbines. J. Renew. Sustain. Energy 2016, 8, 023308. [Google Scholar] [CrossRef]
  62. Zhang, X.; Yang, C.; Li, S. Influence of the Heights of Low-Level Jets on Power and Aerodynamic Loads of a Horizontal Axis Wind Turbine Rotor. Atmosphere 2019, 10, 132. [Google Scholar] [CrossRef]
  63. Voyles, I.T.; Roy, C.J. Evaluation of Model Validation Techniques in the Presence of Uncertainty. In Proceedings of the 16th AIAA Non-Deterministic Approaches Conference, National Harbor, MD, USA, 13–17 January; American Institute of Aeronautics and Astronautics: Reston, VA, USA, 2014. [Google Scholar] [CrossRef]
Figure 1. Schematic visualization of the load validation framework.
Figure 2. Illustration of EDF plots and area metric calculations for two imaginary examples with (a) offsets and (b) different spreads of the EDF. B ( x ) is the benchmark, and S 1 ( x ) , S 2 ( x ) are exemplary model distributions.
Figure 3. Aerial view of the Testfeld BHV test site with the AD8-180 wind turbine and met mast as circle and star, respectively, (a) and wind speed measurement heights, indicated as dots and crosses (b), used in this work.
Figure 4. Fitted Mann spectra from the turbulence measurements of the sonic at 110 m above ground level (a) and extraction of an exemplary time series at hub position for the two different spectra (b).
Figure 5. Development of the LLJ profile as measured by the vertically profiling LiDAR (a) and the derived shear profiles for the second test case (b). The considered 10 min period is indicated as a red vertical line in (a). The rotor area is shaded in gray, and the hub height is indicated as a dashed line.
Figure 6. Results of the uncertainty model. Variation in the mean wind speed uncertainty over varying power law exponent uncertainties (a) and MC results from 10,000 simulations (b).
Figure 7. EDF of the blade flapwise rbm for test case 1 with response metric mean (a) and std (b). The area metric of KSEC meas with respect to Mann IEC is shaded in gray.
Figure 8. Normalized area metric results for test case 1, colored by the N A M value: (a) N A M for different response metrics and the SoI blade flapwise rbm and (b) N A M for various SoI and the response metric std.
Figure 9. Normalized EDFs for test case 2: (a) max blade flapwise rbm, (b) mean hub nodding moment.
Figure 10. Normalized area metrics for test case 2, colored according to their value: (a) N A M for response metric max and (b) N A M for the member MM fit.
Table 1. Evaluated SoIs, including bending moments (bm).

SoI              Description
GenPower         generator power
RotorSpeed       rotor speed
Blade_My_root    blade flapwise root bm
Blade_Mx_root    blade edgewise root bm
Tb_My            tower bottom fore-aft bm
Tb_Mx            tower bottom side-side bm
Hub_stat_My      hub nodding moment
Table 2. Applied response metrics, including Damage Equivalent Loads (DELs).

Metric    Description
mean      average of SoI time series
std       standard deviation
min       minimum
max       maximum
stDEL     short-term DEL
Table 3. Wind field parameters for the test cases (TC) of the framework.

TC   TI      V_hub        Shear                    Turb. Parameters     Turb. Model    Focus
1    20.2%   8.07 m s⁻¹   Equation (6), α = 0.2    IEC, measurements    KSEC, Mann     turbulence
2    20.2%   8.07 m s⁻¹   measurements             IEC                  Mann           shear
Table 4. IEC parameters and resulting parameters from the spectral fits to Equation (18).

             Mann Uniform Shear    KSEC
Parameter    Γ       L_M           σ2/σ1    σ3/σ1    L_1        L_2        L_3
IEC          3.9     33.6 m        0.80     0.50     340.2 m    113.4 m    27.7 m
Fit          3.59    71.2 m        0.78     0.59     407.0 m    97.8 m     44.9 m