Article

A System-Level Reliability Growth Model for Efficient Inspection Planning of Offshore Wind Farms

Linsheng Li and Guang Zou *
Department of Ocean Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(7), 1140; https://doi.org/10.3390/jmse12071140
Submission received: 14 May 2024 / Revised: 3 July 2024 / Accepted: 4 July 2024 / Published: 6 July 2024
(This article belongs to the Special Issue Advances in the Performance of Ships and Offshore Structures)

Abstract

Fatigue damage can lead to failures of structural systems. To reduce the failure risk and enhance the reliability of structural systems, inspection and maintenance interventions are required, and it is important to develop an efficient inspection strategy. This study, for the first time, develops a system-level reliability growth model to establish efficient inspection planning. System-level reliability growth is defined as the percentage increase in the system reliability index with inspection relative to that without inspection. The probabilistic S-N approach is used to obtain the reliability index without inspection. Moreover, advanced risk analysis and Bayesian inference techniques are used to obtain the reliability index with inspection. The optimal inspection planning is obtained by maximizing system-level reliability growth. This model is applied to an offshore wind farm. The results show that inspection efficiency can be improved by increasing the number of repair objects in response to a ‘detection’ inspection outcome, changing the inspection object for each inspection, and increasing the inspection quality. The maximum system-level reliability growth gained from one additional inspection decreases as the number of inspections increases. This study quantifies the inspection efficiency of offshore wind farms by explicit system-level reliability growth computation, offering valuable insights for promoting sustainable energy solutions.

1. Introduction

Fatigue deterioration is a common phenomenon in offshore structures such as ships, offshore platforms, offshore wind turbines (OWTs), etc. [1]. Fatigue cracks usually start at pre-existing defects or stress concentrations and then gradually propagate, eventually leading to structural failure [2,3,4]. This failure may cause great damage to the ocean environment, human life, and property [5,6,7]. Thus, it is important to ensure the reliability of structural systems.
Many classical system reliability analysis methods have been widely adopted in previous studies. One is fault tree analysis (FTA), which focuses on analyzing the causes and paths of system failures. Volkanovski et al. [8] proposed a novel approach for assessing the reliability of power systems using FTA, which can identify the key elements in the power systems. Ding et al. [9] established a new method to assess the reliability of residual heat removal systems, in which FTA was employed to analyze the logical relationships among events. Another is event tree analysis (ETA), which can analyze all the possible sequences of events and predict the failure probability of the system. Lu et al. [10] evaluated the system reliability of cable-stayed bridges using ETA incorporating the $\beta$-unzipping method. Researchers have also used ETA in reliability analysis considering inspection and maintenance interventions. Kim and Frangopol [11] quantified the relationship between damage detection delay (associated with reliability) and inspection strategies for fatigue-deteriorating structures by ETA. Zou et al. [1] proposed a holistic approach for inspection and maintenance decision-making and for obtaining optimum fatigue reliability using ETA. Thus, ETA is employed as a basic tool in this study.
Inspection and maintenance can provide new information on the state of the system, thereby reducing uncertainties in system reliability assessment. Bayesian inference can update the prior probabilistic model based on this new information and quantify the effect of inspection and maintenance on system reliability [12]. Classical reliability calculation methods, such as the first-order reliability method (FORM), are efficient in evaluating system reliability but inconvenient for Bayesian updating [13]. To integrate Bayesian updating into system reliability calculation, researchers have proposed many methods. The Bayesian network (BN) is one of the most popular models owing to its intuitive nature and its ability to address correlations among multiple variables. Straub [14] established a model for computationally efficient and robust reliability analysis of probabilistic deterioration processes based on a dynamic Bayesian network (DBN). Groden and Collette [15] combined a probabilistic fatigue deterioration model with a permanent set model by BN to update the reliability of marine structures. Hackl and Kohler [16] proposed a model that combines structural reliability analysis and BN to estimate the reliability of reinforced concrete structures, and this model can be updated based on the outcomes of inspection and monitoring. Another is Markov chain Monte Carlo (MCMC), which aims to overcome the inefficiency of Monte Carlo simulation (MCS) in estimating small failure probabilities. Song et al. [17] proposed an efficient reliability sensitivity algorithm based on conditional samples generated by MCMC, and this algorithm is suitable for highly nonlinear limit state equations. Papaioannou et al. [18] introduced a new method for MCMC sampling in standard normal space for subset simulation, and this method is simpler than previous algorithms while maintaining the same accuracy and efficiency.
Inspection and maintenance interventions are common practices to improve the safety and reliability of structural systems [19,20]. Maintenance actions depend on inspection outcomes [11,21]. If the inspection result is ‘detection’, an immediate repair action will be carried out. By contrast, if the inspection result is ‘no detection’, no action will be performed. Inspection and maintenance should be planned rationally to satisfy the needs of owners. The optimal inspection and maintenance planning is to find a set of rules at each decision node, aiming at optimizing some indicators (e.g., cost minimization).
Numerous algorithms for this optimization problem have been proposed over the years. In the 1950s, the founding algorithms were created in the field of operations research [22]. Later, Bellman [23] developed Markov decision processes (MDPs), providing a strong tool for asset management. However, MDPs have several limitations; one is the assumption that inspections always reflect the true state of the system with certainty [24]. To handle these limitations, partially observable Markov decision processes (POMDPs) were developed as a more general framework. Faddoul et al. [25] presented generalized POMDPs that combine decision analysis and dynamic programming and applied them to maintenance optimization. Schöbi and Chatzi [26] described an enhanced variant of POMDPs for the maintenance planning of infrastructure, providing a framework to address non-linearities in action models. MDPs and POMDPs are suitable for simple systems consisting of a limited number of components. However, when the number of components is large, they become impractical because the Markovian transition matrices become extremely large. To this end, Andriotis and Papakonstantinou [27] proposed a deep reinforcement learning (DRL) algorithm to establish effective life-cycle strategies for large multi-component systems.
Regarding the objective function of this optimization problem, most studies focus on cost/risk minimization. Garbatov and Guedes Soares [28] presented a maintenance cost minimization method for floating structures and investigated the effect of different parameters on maintenance cost. Straub and Faber [29] proposed an approach that considers the entire system in risk-based inspection (RBI) planning and discussed the dependencies among system components. Nielsen and Sørensen [30] evaluated the cost of inspections, repairs, and lost production for a single wind turbine and assessed the impact of parameters such as failure rate and inspection interval on costs. Luque and Straub [13] presented an integral risk-based optimization method for structural system inspections using the DBN framework, which allows for the rapid calculation of system reliability based on inspection results. Given that risk is the product of cost and the corresponding probability, risk optimization is also a kind of cost optimization. Thus, previous studies have mainly focused on cost optimization.
For large offshore structures such as offshore platforms and offshore wind farms, reliability performance may be more important than cost. However, only a few studies have considered reliability growth [31], and these studies were applied at the component level. In engineering practice, inspection planning is generally aimed at structural systems. Therefore, it is necessary to develop a system-level reliability growth (SRG) model.
In this study, an SRG model is proposed for the first time and applied to efficient inspection of offshore wind farms. This study is novel in the following aspects: (a) SRG is defined as the percentage increase in the system-level reliability index with inspection relative to that without inspection. This novel definition emphasizes the importance of system-level reliability performance and provides another way of assessing the value of inspections. (b) A framework to quantify the SRG of structural systems is proposed, which also works for a single component. (c) The law of diminishing SRG is established, i.e., the maximum SRG gained from one additional inspection decreases as the number of inspections increases. (d) The characteristics of the expected posterior failure probability and the mechanism of the optimal inspection time are revealed. In summary, theoretically, the proposed SRG model evaluates system reliability performance from a new perspective, promoting the development of system reliability theory. In practice, this model can be used to develop efficient inspection planning, in order to enhance the reliability and economic performance of offshore wind farms and contribute to the sustainable development of clean energy.
The rest of this study is arranged as follows. Section 2 introduces the fatigue damage assessment model of the monopile OWT. Section 3 presents the SRG model, including prior reliability analysis by the S-N (stress range versus fatigue life) approach, posterior reliability analysis by Bayesian inference, and the quantification method of SRG. Section 4 describes the optimization method used to obtain the optimal inspection planning based on SRG. Section 5 gives the results and discussion of the optimal inspection planning for offshore wind farms. Section 6 presents the main conclusions.

2. Fatigue Damage Assessment Model

In this study, the offshore wind farm is a series system that consists of 15 OWTs (see Figure 1). These OWTs are subjected to multiple loadings from wind, waves, and currents. The water depth is 25 m, which is relatively shallow, so the monopile foundation is adopted to support each offshore wind turbine. The monopile support structure is composed of a transition piece and a monopile. In the construction of this monopile support structure, steel plates are rolled and then horizontally and vertically welded. Accordingly, a single OWT, which is simplified as a monopile support structure, is defined as a component of the system.
For the $i$th ($i = 1, 2, \ldots, 15$) OWT, when the long-term stress range is described by a Weibull distribution with scale parameter $q_i$ and a one-slope S-N curve is used, the accumulated fatigue damage of the monopile support structure $D_i$ is given by [32] as follows:

$D_i = \dfrac{N_0}{\bar{a}}\, q_i^m \,\Gamma\!\left(1 + \dfrac{m}{h}\right)$ (1)

where $N_0$ is the total number of loading cycles during the design service life; $q_i$ is the scale parameter of the Weibull stress range distribution; $h$ is the shape parameter of the Weibull stress range distribution; $\Gamma(\cdot)$ is the Gamma function; $m$ is the inverse negative slope of the S-N curve; $\bar{a}$ is the intercept of the S-N curve with the logarithmic N-axis. If the accumulated fatigue damage follows the Palmgren–Miner rule [33,34] and the annual number of loading cycles $N_a$ is constant over time $t$ (in years), the accumulated fatigue damage of the $i$th monopile support structure after $t$ years, $D_{i,t}$, can be obtained as follows:

$D_{i,t} = \dfrac{N_a \times t}{\bar{a}}\, q_i^m \,\Gamma\!\left(1 + \dfrac{m}{h}\right)$ (2)
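To make the damage accumulation concrete, a minimal Python sketch of Equation (2) is given below, assuming the deterministic parameter values of Table 1; the function name and the example stress-range scale value are illustrative only.

```python
import math

def fatigue_damage(t_years, q, N_a=4e6, a_bar=5.81e11, m=3.0, h=1.0):
    # Accumulated fatigue damage after t_years (Eq. 2); q is the Weibull
    # scale parameter of the long-term stress range in N/mm^2.
    return (N_a * t_years / a_bar) * q**m * math.gamma(1.0 + m / h)

# Example: damage after the 20-year service life, with q = exp(1.8),
# the median scale parameter implied by ln(q) ~ Normal(1.8, 0.22) in Table 1.
print(fatigue_damage(20, math.exp(1.8)))
```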

3. SRG Model

3.1. Prior Reliability

As stated earlier, the monopile support structures of the OWTs are considered to be the system components. By using the S-N approach, the limit state function of the $i$th component, $g_i(\mathbf{X})$, is described as follows:

$g_i(\mathbf{X}) = D_c - \dfrac{N_a \times t}{\bar{a}}\, A_{4,i} \left(A_{1,i} A_{2,i} A_{3,i}\right)^m q_i^m \,\Gamma\!\left(1 + \dfrac{m}{h}\right)$ (3)

where $D_c$ is the critical accumulated fatigue damage (i.e., the criterion for component failure); $A_1$ represents the uncertainty owing to environmental loadings; $A_2$ represents the uncertainty related to the stress concentration factor; $A_3$ represents the uncertainty in the mathematical model for fatigue damage assessment; $A_4$ represents the uncertainty associated with the material; $\mathbf{X}$ is the vector of basic random variables.
All the parameters (referenced from [32,35,36]) in the limit state function are presented in Table 1. Note that the correlation among the components should be considered. In the present study, it is described by the correlation coefficients of the parameters in the limit state function (i.e., $\rho_{A_1}$, $\rho_{A_2}$, $\rho_{A_3}$, $\rho_{A_4}$, $\rho_{\ln q}$). The Gaussian copula (i.e., Nataf transformation) is used to model the joint distribution of these parameters [35,36], and the correlation coefficients are set to $\rho_{A_1} = 0.5$, $\rho_{A_2} = 0.6$, $\rho_{A_3} = 0.8$, $\rho_{A_4} = 0.6$, and $\rho_{\ln q} = 0.8$.
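As an illustration of the Gaussian copula assumption, the following Python sketch samples equicorrelated variables with a common-factor construction (one way to realize an equicorrelated Gaussian copula); the function name and sample size are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def equicorrelated_normals(n_samples, n_comp, rho):
    # Standard normal samples for n_comp components with pairwise correlation
    # rho, built from one common factor plus independent component factors.
    z_common = rng.standard_normal((n_samples, 1))
    z_local = rng.standard_normal((n_samples, n_comp))
    return np.sqrt(rho) * z_common + np.sqrt(1.0 - rho) * z_local

n, k = 100_000, 15
A1 = 1.0 + 0.15 * equicorrelated_normals(n, k, 0.5)         # A1 ~ N(1, 0.15), rho = 0.5
q = np.exp(1.8 + 0.22 * equicorrelated_normals(n, k, 0.8))  # ln q ~ N(1.8, 0.22), rho = 0.8
```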
The failure probability of the $i$th component is hence given by the following:

$P_{f,c,i} = P\!\left[g_i(\mathbf{X}) \leq 0\right]$ (4)

Because the offshore wind farm is assumed to be a series system, the failure of any one component will lead to system failure. The event of system failure $E$ can be defined as follows:

$E = E_1 \cup E_2 \cup \ldots \cup E_{15} = \bigcup_{i=1}^{15} E_i$ (5)

where $E_i$ is the event modeling the failure of component $i$. The failure probability of the system $P_{f,s}$ is given by the following:

$P_{f,s} = P\!\left(\bigcup_{i=1}^{15} E_i\right) = P\!\left(\bigcup_{i=1}^{15} \left[g_i(\mathbf{X}) \leq 0\right]\right)$ (6)

If MCS with $N$ samples is employed, Equation (6) can be rewritten as follows [37]:

$P_{f,s} = \dfrac{1}{N} \sum_{j=1}^{N} I\!\left[\min_i g_i(\mathbf{X}_j) \leq 0\right]$ (7)

where $I[\min_i g_i(\mathbf{X}) \leq 0]$ is an indicator function equal to 1 if $\min_i g_i(\mathbf{X}) \leq 0$ and equal to zero otherwise. The coefficient of variation of the MCS estimate is expressed as follows [38]:

$CoV = \sqrt{\dfrac{1 - P_{f,s}}{P_{f,s}\,N}} \approx \dfrac{1}{\sqrt{P_{f,s}\,N}}$ (8)

The reliability index $\beta$ can be computed from the probability of failure:

$\beta = -\Phi^{-1}(P_f)$ (9)

where $\Phi^{-1}(\cdot)$ is the inverse standard normal cumulative distribution function (CDF).
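A minimal MCS sketch of Equations (7)–(9) follows, assuming the limit-state values of Equation (3) have already been evaluated for all samples and components; the function name is illustrative.

```python
import numpy as np
from scipy.stats import norm

def series_system_mcs(g):
    # g: array of shape (n_samples, n_components) with limit-state values (Eq. 3).
    fail = g.min(axis=1) <= 0.0                     # Eq. (7): series system fails if any component fails
    pf = fail.mean()
    cov = np.sqrt((1.0 - pf) / (pf * g.shape[0]))   # Eq. (8)
    beta = -norm.ppf(pf)                            # Eq. (9)
    return pf, cov, beta
```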
If all the limit state functions are linearized at their respective design $\beta$-points and the first-order reliability method is employed, the failure probability of the system $P_{f,s}$ is expressed as follows [32]:

$P_{f,s} = 1 - \Phi_k(\boldsymbol{\beta}, \boldsymbol{\rho})$ (10)

where $\Phi_k$ is the $k$-dimensional ($k = 15$) standard normal CDF, $\boldsymbol{\beta}$ is the vector of design $\beta$-points, and $\boldsymbol{\rho}$ is the matrix of correlation coefficients between the linearized safety margins.
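For reference, Equation (10) can be evaluated with SciPy's multivariate normal CDF; the sketch below assumes, purely for illustration, an equicorrelated matrix of safety-margin correlations.

```python
import numpy as np
from scipy.stats import multivariate_normal

def form_series_pf(betas, rho=0.6):
    # Eq. (10): Pf,s = 1 - Phi_k(beta; rho) for k linearized safety margins;
    # an equicorrelated correlation matrix is assumed here for illustration.
    k = len(betas)
    corr = np.full((k, k), rho)
    np.fill_diagonal(corr, 1.0)
    mvn = multivariate_normal(mean=np.zeros(k), cov=corr)
    return 1.0 - mvn.cdf(np.asarray(betas))
```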
Note that this reliability index is named ‘prior reliability index’ because inspections are not considered. In the next Section, Bayesian inference is used to update the reliability index considering inspection outcomes, and the updated reliability index is named the ‘conditional posterior reliability index’.

3.2. Posterior Reliability

Surveyors are required to inspect the offshore wind farm to assess the safety of monopile support structures over the service life. These inspections generate data on the structural condition, and this information helps improve the reliability of the whole offshore wind farm, as well as enhance people’s confidence in offshore wind power projects. Therefore, it is important to develop a model to update the system reliability according to the inspection outcomes.

3.2.1. Event Tree Model

The event tree model, which shows all possible outcomes, is employed for the offshore wind farm. Figure 2 presents the event tree analysis when two inspections are conducted. Here, $t_n$ is the $n$th ($n = 1, 2$) inspection time, and $t_e$ is the end time of the service life, equal to 20 years. ‘F’, ‘D’, ‘R’, and ‘N’ represent ‘failure’, ‘detection’, ‘repair action’, and ‘no action’, respectively, and ‘$\bar{F}$’ and ‘$\bar{D}$’ represent ‘survival’ and ‘no detection’, respectively. Consider the first inspection: because the inspection planning is established before the inspection, whether the visited OWT has failed is unknown at $t_1$. If the visited OWT has failed, this failure will be easily recognized without inspection, and the failed OWT cannot be restored through repair, i.e., it remains in a failed state. If the visited OWT has not failed and the inspection outcome is ‘no detection’, there will be no maintenance action. If the visited OWT has not failed and the inspection outcome is ‘detection’, a timely repair will be performed. Thus, there are three branches (‘F’, ‘$\bar{F}\,\&\,D\,\&\,R$’, ‘$\bar{F}\,\&\,\bar{D}\,\&\,N$’) for the first inspection. When it comes to the second inspection, the first inspection outcome is known to be ‘detection’ or ‘no detection’. Following the same considerations as for the first inspection, there are also three branches after branch ‘$\bar{F}\,\&\,D\,\&\,R$’ or ‘$\bar{F}\,\&\,\bar{D}\,\&\,N$’ for the second inspection.

3.2.2. Probability of Detection Model

Non-destructive inspection methods play a crucial role in assessing the integrity of structures without causing permanent damage. However, it is important to acknowledge that the imperfection of inspection methods introduces some level of uncertainty into the evaluation. Quantifying the uncertainty associated with the quality of an inspection method is essential for ensuring the reliability and accuracy of the inspection results. In this study, the probability of detection $P_d$ for a given accumulated fatigue damage $D$ is given by [35,39]:

$P_d = 1 - \exp(-D/\eta)$ (11)

where $\eta$ is the inspection quality, which can be interpreted as the mean value of the detectable fatigue damage size. The larger the value of $\eta$, the lower the inspection quality. The inspection quality $\eta$ is set to 0.1 and 0.3 [11,35]. The relationships between the probability of detection and the accumulated fatigue damage for two inspection methods with $\eta = 0.1$ and $\eta = 0.3$ are shown in Figure 3.
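A short sketch of the probability of detection model in Equation (11), reproducing the two curves of Figure 3; the grid of damage values is arbitrary.

```python
import numpy as np

def probability_of_detection(damage, eta):
    # Eq. (11): PoD for a given accumulated fatigue damage and inspection
    # quality eta (mean detectable damage size); larger eta = lower quality.
    return 1.0 - np.exp(-np.asarray(damage) / eta)

damage = np.linspace(0.0, 1.0, 101)
pod_high = probability_of_detection(damage, 0.1)   # higher-quality inspection
pod_low = probability_of_detection(damage, 0.3)    # lower-quality inspection
```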

3.2.3. Reliability Updating

Bayesian inference is a powerful tool for updating prior distributions based on new information [40,41]. Consider the previously mentioned event tree model: there are three branches at each chance node. Let $B_1$ represent branch ‘F’, $B_2$ represent branch ‘$\bar{F}\,\&\,D\,\&\,R$’, and $B_3$ represent branch ‘$\bar{F}\,\&\,\bar{D}\,\&\,N$’, and let $B$ be the set of $B_1$, $B_2$, and $B_3$. Each branch denotes a possible event, which can be regarded as new information. The conditional failure probability of the system can be updated as follows:

$P(E \mid B = B_k) = \dfrac{P(E \cap B = B_k)}{P(B = B_k)} \quad \text{for } k = 1, 2, 3$ (12)
The conditional reliability index corresponding to this failure probability can also be updated according to Equation (9).
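When the system state and branch events are evaluated on the same MCS samples, Equation (12) reduces to a counting ratio, as in the sketch below (function and argument names are assumptions).

```python
import numpy as np

def conditional_system_pf(system_fails, branch_occurs):
    # Eq. (12): P(E | B = Bk) = P(E and B = Bk) / P(B = Bk), estimated by
    # counting over boolean arrays defined on the same MCS samples.
    p_branch = branch_occurs.mean()
    return np.logical_and(system_fails, branch_occurs).mean() / p_branch
```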

3.3. SRG Computation

In this study, SRG is defined as the percentage increase in the system-level reliability index with inspection relative to that without inspection. Considering the first inspection, the conditional system-level reliability growth ($CSRG$) is given as follows:

$CSRG = \dfrac{\beta|(B = B_k) - \beta}{\beta} \times 100\% \quad \text{for } k = 1, 2, 3$ (13)

where $\beta$ is the prior reliability index and $\beta|(B = B_k)$ is the conditional posterior reliability index for a given branch. Taking all the branches into consideration, SRG is expressed as follows:

$SRG = \dfrac{\beta' - \beta}{\beta} \times 100\% = \dfrac{-\Phi^{-1}[E(P_f)] - \beta}{\beta} \times 100\%$ (14)

where $E(P_f)$ is the expected posterior failure probability considering all the branches, and $\beta'$ is the posterior reliability index corresponding to $E(P_f)$. Because the reliability index of the system is smallest at the end of the service life $t_e$, it is necessary to focus on the reliability index at $t_e$. Thus, the reliability indices in Equation (14) refer to the reliability indices at $t_e$.
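A minimal sketch of the SRG computation in Equations (9) and (14), assuming the prior failure probability and the expected posterior failure probability at $t_e$ are already available.

```python
from scipy.stats import norm

def srg_percent(prior_pf, expected_posterior_pf):
    # Eq. (14): percentage growth of the reliability index at the end of
    # the service life, with beta = -Phi^{-1}(Pf) from Eq. (9).
    beta_prior = -norm.ppf(prior_pf)
    beta_post = -norm.ppf(expected_posterior_pf)
    return (beta_post - beta_prior) / beta_prior * 100.0
```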

4. Optimal Inspection Planning Based on SRG Computation

Following Section 3.3 for the first inspection, the optimal inspection time can be obtained by maximizing SRG. Then, when the second inspection is considered, the first inspection is assumed to have been conducted at the optimal inspection time, and the first inspection outcome is known to be $B_2$ or $B_3$. Based on this inspection outcome, the reliability index can be updated. It is called the posterior reliability index after the first update, and it becomes the prior reliability index for the second update. In other words, the purpose of the known first inspection outcome is to obtain the prior reliability index for the second update. After that, taking all the branches ($B_1$, $B_2$, and $B_3$) into consideration, the posterior reliability index can be obtained. Finally, the SRG for the second inspection can be obtained, and the optimal second inspection time is the one at which this SRG is maximized.
The procedure mentioned above can be repeated until the $n$th inspection. In general, the optimal inspection planning is obtained by solving an optimization problem that maximizes SRG for each inspection, which can be described as follows:

Find $t_j$, $j = 1, 2, \ldots, n$ (15)
to maximize $SRG_j$ (16)
such that $t_{j+1} \geq t_j$ (17)
given $n$, $\eta$, $\mathbf{X}$, and $B$ (18)

where $n$ is the number of inspections; $t_j$ is the $j$th inspection time; $SRG_j$ is the SRG corresponding to $t_j$; $\eta$ is the inspection quality; $\mathbf{X}$ is the vector of basic parameters in the limit state function (see Table 1); and $B$ is the inspection outcome. Note that $t_j$ ($0 \leq t_j \leq 20$) is the design variable, $SRG_j$ is the objective function, $n$ ($n = 1, 2$), $\eta$ (0.1 or 0.3), and $B$ (‘detection’ or ‘no detection’) are deterministic variables, and $\mathbf{X}$ is the vector of basic random variables. According to Equations (1)–(18), for given $n$, $\eta$, $\mathbf{X}$, and $B$, $SRG_j$ is a function of the single variable $t_j$. Thus, an exhaustive search algorithm is employed to solve this optimization problem, i.e., $SRG_j$ is calculated separately for every candidate inspection time $t_j$, and the time corresponding to the maximum $SRG_j$ is the optimal inspection time.
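The exhaustive search can be written compactly as below; srg_of is a placeholder for a user-supplied function that returns $SRG_j$ for a candidate inspection year under given $n$, $\eta$, $\mathbf{X}$, and $B$.

```python
def optimal_inspection_time(srg_of, candidate_times):
    # Exhaustive search over Eqs. (15)-(18): evaluate SRG_j at every candidate
    # inspection time and keep the maximizer.
    t_best = max(candidate_times, key=srg_of)
    return t_best, srg_of(t_best)

# Usage sketch: t_opt, srg_max = optimal_inspection_time(srg_of, range(1, 20))
```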

5. Results and Discussion

Two inspections with two different inspection qualities ($\eta = 0.1$ and $\eta = 0.3$) are considered for the offshore wind farm. The repair effect is described by an ‘as good as new’ model [42,43]: when a repair action is performed, the distribution of accumulated fatigue damage recovers to its initial condition. Three different inspection rules are defined as follows:
  • Inspection rule 1 (IR1). Each inspection is conducted on the same OWT (e.g., No. 1 OWT for both inspections). If the inspection outcome is ‘detection’, all the OWTs will be repaired;
  • Inspection rule 2 (IR2). Each inspection is conducted on a different OWT (e.g., No. 1 OWT for the first inspection and No. 2 OWT for the second inspection). If the inspection outcome is ‘detection’, all the OWTs will be repaired;
  • Inspection rule 3 (IR3). Each inspection is conducted on a different OWT (e.g., No. 1 OWT for the first inspection and No. 2 OWT for the second inspection). If the inspection outcome is ‘detection’, only the inspected OWT will be repaired.
MCS has advantages in dealing with nonlinear limit state equations and correlations of random variables, and it has been widely used to calculate system reliability in previous studies [13,36,38]. MCS with 100,000 samples is mainly used to obtain the following results. The maximum system failure probability $P_{f,s}$ is on the order of $10^{-1}$, and the coefficient of variation of MCS is on the order of $10^{-2}$ according to Equation (8). Therefore, this sample size is appropriate.
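A one-line check of the quoted orders of magnitude using Equation (8):

```python
import math

pf, n_samples = 0.1, 100_000                       # Pf,s ~ 1e-1, N = 100,000
cov = math.sqrt((1.0 - pf) / (pf * n_samples))
print(f"CoV = {cov:.4f}")                          # ~ 0.0095, i.e., on the order of 1e-2
```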
This section is arranged as follows. Section 5.1 compares the calculations of the reliability index between MCS and FORM to verify the validity of the results of MCS. Section 5.2 compares the inspection efficiency of the offshore wind farm under different inspection strategies. Section 5.3 compares the inspection efficiency between a single OWT and the offshore wind farm. Section 5.4 conducts an in-depth analysis of the optimal inspection time. Section 5.5 makes some suggestions for future research.

5.1. Verification of Calculation Results by MCS and FORM

In Section 3.1, MCS and FORM are mentioned to calculate the prior system reliability index. Figure 4 shows the results of these two methods. It can be seen that the difference between MCS and FORM is sufficiently small. Thus, MCS is a reliable method to obtain the results in the following subsections.

5.2. Comparative Study of SRG under Different Inspection Strategies

5.2.1. SRG under IR1

For the first inspection, the SRG of the offshore wind farm with respect to the inspection time for $\eta = 0.1$ and $\eta = 0.3$ is shown in Figure 5. First, SRG increases to a maximum and then decreases with the inspection time, so the optimal inspection time can be determined: the 11th year for the higher inspection quality and the 12th year for the lower inspection quality. Second, the maximum SRG with the higher inspection quality is greater than that with the lower inspection quality. This is reasonable because higher inspection quality usually means higher inspection costs; if higher inspection quality did not yield more benefits in terms of SRG, the owner would not pay additional costs for it. Finally, it is verified that these two findings do not change for the other inspection rules (IR2 and IR3) or for the $n$th inspection ($n = 2, 3, \ldots$).
For the second inspection, the earliest candidate inspection time is the optimal first inspection time. As mentioned earlier, the first inspection outcome has two possible events: ‘detection’ or ‘no detection’. If the first inspection outcome is ‘detection’, according to the assumption of IR1, the system condition recovers to its original state. SRG over the second inspection time can be obtained by using the same method as for the first inspection, and its curve is shown in Figure 6. The optimal inspection times are the 16th year for the higher inspection quality and the 17th year for the lower inspection quality. On the other hand, if the first inspection outcome is ‘no detection’, SRG over the second inspection time is shown in Figure 7. The optimal inspection times are also the 16th year for the higher inspection quality and the 17th year for the lower inspection quality.

5.2.2. SRG under IR2

Figure 8, Figure 9 and Figure 10 show the SRG under IR2. For the first inspection, SRG under IR2 (see Figure 8) is the same as that under IR1 (see Figure 5) because of the same inspection object (No. 1 OWT) and same repair action (i.e., if the inspection outcome is ‘detection’, all the OWTs will be repaired).
Consider the second inspection; if the first inspection outcome is ‘detection’, SRG under IR2 (see Figure 9) is similar to that under IR1 (see Figure 6) because the system condition recovers to its original state after the first inspection. This means that the system condition is the same under IR1 and IR2 before the second inspection is conducted. Although the inspection object is different (i.e., No.1 OWT under IR1 and No.2 OWT under IR2), the parameters of these two OWTs are identically distributed. As a result, the difference in SRG calculations under IR1 and IR2 is minimal.
However, if the first inspection outcome is ‘no detection’, SRG under IR2 (see Figure 10) differs considerably from that under IR1 (see Figure 7). First, SRG at the earliest candidate inspection time is larger than 0 under IR2, while it is equal to 0 under IR1. The reason is that, at this time, the first inspection has already been conducted on No. 1 OWT. If the second inspection is also conducted on No. 1 OWT at this time, no additional information is gained, resulting in zero SRG. But if the second inspection is conducted on No. 2 OWT at this time, the system gains new information, so SRG is larger than 0.
Second, comparing Figure 7 and Figure 10, the maximum SRG under IR2 (14 for η = 0.1 and 8.3 for η = 0.3 ) is more than twice as much as that under IR1 (5.7 for η = 0.1 and 3.4 for η = 0.3 ). One possible explanation is that under IR1, two inspections are conducted on the same object (No. 1 OWT), leading to higher information redundancy and lower SRG. This implies that changing the inspection object is a more efficient way to increase the system reliability for the second inspection. It can be expected that the difference in inspection efficiency between IR2 and IR1 grows as the number of inspections increases.

5.2.3. SRG under IR3

Figure 11, Figure 12 and Figure 13 show the SRG under IR3. Comparing Figure 8 and Figure 11, the maximum SRG for the first inspection under IR2 is larger than that under IR3, and the difference is huge (e.g., 112 versus 10.8 for $\eta = 0.1$). It is verified that this finding also holds for the $n$th ($n = 2, 3, \ldots$) inspection. This indicates that the inspection efficiency of IR2 is higher than that of IR3. In other words, when an inspection yields a ‘detection’ outcome, repairing the whole system is a more efficient way to increase the system reliability than repairing only the inspected object. The reason is that the offshore wind farm is regarded as a series system. Under IR3, if the inspection outcome is ‘detection’, the reliability of the inspected OWT can be enhanced through repair, but the unrepaired OWTs remain the weak links of the system, resulting in a lower SRG.
In summary, from the perspective of gaining the maximum SRG, IR2 has the highest inspection efficiency compared with IR1 and IR3, and a higher inspection quality leads to a higher inspection efficiency. In other words, increasing the number of repair objects in response to a ‘detection’ inspection outcome, changing the inspection object for each inspection, and increasing the inspection quality can all improve inspection efficiency. This finding is important for structural system reliability engineering: it highlights the need for high-quality, high-coverage inspections and comprehensive repairs to improve system reliability.

5.2.4. The Maximum SRG with Respect to the Number of Inspections

Table 2 shows the maximum SRG under IR1, IR2, and IR3. For a given inspection quality, the maximum SRG for the first inspection is larger than that for the second inspection. In other words, as the number of inspections increases, the maximum SRG gained from one additional inspection (which can be defined as marginal SRG) decreases. This is in line with the law of diminishing marginal utility in economics and can help find the optimal number of inspections. For the first inspection, the benefits (i.e., maximum SRG) are usually larger than the costs. As the number of inspections increases, the marginal benefits may be smaller than the marginal costs owing to the diminishing marginal benefits, resulting in unprofitability. By weighing the benefits against costs, the optimal number of inspections is the one at which the marginal benefits are equal to the marginal costs.

5.3. Differences between a Single OWT and the Offshore Wind Farm

When the number of components is equal to one, the SRG of the offshore wind farm under IR1 reduces to the SRG of a single OWT under IR1. For comparison, Table 3 presents the maximum SRG for the system (15 OWTs) and for a single OWT under IR1. It can be seen that the law of diminishing marginal SRG still holds at the component level. The difference is that the diminishing rate for the system is faster than that for a single OWT. One possible explanation is that the system is more difficult to maintain and manage owing to complex component interactions and system integration issues, resulting in a sharper reduction in its potential for reliability growth.

5.4. In-Depth Analysis of the Optimal Inspection Time

5.4.1. Characteristics of Expected Posterior Failure Probability

Recall Equation (14): to maximize $SRG_j$, $\beta'$ needs to be maximized, or equivalently the expected posterior failure probability $E(P_f)$ needs to be minimized. Thus, the characteristics of $E(P_f)$ are discussed in this section. According to the event tree model in Section 3.2.1, $E(P_f)$ at the end of the service life (i.e., the 20th year), $E[P_f(20)]$, can be expressed as follows:

$E[P_f(20)] = P(B = B_1)\,P[E(20) \mid B = B_1] + P(B = B_2)\,P[E(20) \mid B = B_2] + P(B = B_3)\,P[E(20) \mid B = B_3]$ (19)

Note that Equation (19) is a function of the inspection time $t$ ($0 \leq t \leq 20$). Consider the first term on the right-hand side of this equation for a given inspection time: because $B_1$ represents a system condition of ‘failure’ at that time, $P(B = B_1) = P_f(t)$, and conditional on $B_1$ the system must still be failed at the 20th year, i.e., $P[E(20) \mid B = B_1] = 1$. Let $E[P_f(20)]$ be decomposed into two components, $P_{f,1}(t)$ and $P_{f,2}(t)$; then this equation can be rewritten as follows:

$E[P_f(20)] = P_{f,1}(t) + P_{f,2}(t)$ (20)
where $P_{f,1}(t)$ and $P_{f,2}(t)$ are defined as follows:

$P_{f,1}(t) = P(B = B_1)\,P[E(20) \mid B = B_1] = P_f(t)$ (21)

$P_{f,2}(t) = P(B = B_2)\,P[E(20) \mid B = B_2] + P(B = B_3)\,P[E(20) \mid B = B_3]$ (22)

Clearly, Equation (21) indicates that the first component $P_{f,1}(t)$ is equal to the prior failure probability, which reflects the prior physical model, and Equation (22) indicates that the second component $P_{f,2}(t)$ is the posterior failure probability considering inspection information. This is exactly the characteristic of the expected posterior failure probability: a probabilistic model that incorporates the prior physical model and the inspection data. It theoretically explains the nature of the expected posterior failure probability.
Figure 14 shows the trends of $P_{f,1}(t)$ and $P_{f,2}(t)$ under IR2 for the first inspection. As can be seen, the first component $P_{f,1}(t)$ increases with the inspection time. This is a negative effect on the expected posterior failure probability, reflecting the natural increase in the failure probability without inspection. In contrast, the second component $P_{f,2}(t)$ decreases with the inspection time. This is a positive effect on the expected posterior failure probability, reflecting the reduction effect of inspection on the failure probability. Moreover, $P_{f,1}(t)$ is smaller than $P_{f,2}(t)$ at the early stage but larger at the later stage, which implies that $P_{f,2}(t)$ plays a dominant role in the early period and $P_{f,1}(t)$ in the later period. As a result, the sum of the two is likely to be smallest in the intermediate period.

5.4.2. Mechanism of the Optimal Inspection Time

The optimal inspection time is the time at which SRG is maximized, or equivalently the expected posterior failure probability is minimized. Therefore, the first-order derivative of Equation (20) should be zero at that time, i.e.,

$\dfrac{dE[P_f(20)]}{dt} = \dfrac{dP_f(t)}{dt} + \dfrac{dP_{f,2}(t)}{dt} = V_1 - V_2 = 0$ (23)

where $V_1$ is the increasing rate of the prior failure probability $P_f(t)$, and $V_2$ is the decreasing rate of the posterior failure probability considering inspection data, $P_{f,2}(t)$. Note that $V_1$ and $V_2$ are both positive; the minus sign in front of $V_2$ represents the decreasing trend of $P_{f,2}(t)$. Accordingly, the optimal inspection time $t_{op}$ can be expressed as follows:

$t_{op} = \{\, t \mid V_1(t) = V_2(t) \,\}$ (24)
Equation (24) reveals the mechanism of the optimal inspection time: it is the time at which the increasing rate of the prior failure probability is equal to the decreasing rate of the posterior failure probability considering inspection data.
To obtain the values of $V_1$ and $V_2$, central-difference formulas are adopted as follows:

$V_1(t) = \dfrac{P_{f,1}(t+1) - P_{f,1}(t-1)}{2}$ (25)

$V_2(t) = -\,\dfrac{P_{f,2}(t+1) - P_{f,2}(t-1)}{2}$ (26)
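A sketch of Equations (24)–(26) on a yearly grid, assuming the two components $P_{f,1}(t)$ and $P_{f,2}(t)$ have been evaluated at every year; the crossing of the two rates approximates the optimal inspection time.

```python
import numpy as np

def rate_crossing(pf1, pf2, times):
    # Eqs. (25)-(26): central-difference rates V1 and V2 (yearly spacing),
    # and the point where V1 = V2, which approximates t_op in Eq. (24).
    pf1, pf2 = np.asarray(pf1), np.asarray(pf2)
    v1 = (pf1[2:] - pf1[:-2]) / 2.0
    v2 = -(pf2[2:] - pf2[:-2]) / 2.0    # minus sign: Pf,2 decreases with t
    t_mid = np.asarray(times)[1:-1]
    t_opt = t_mid[np.argmin(np.abs(v1 - v2))]
    return v1, v2, t_opt
```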
Figure 15 shows the trends of $V_1$ and $V_2$ under IR2 for the first inspection. There are two critical points, P1 (horizontal coordinate 11.2) and P2 (horizontal coordinate 11.7), which lie exactly at the optimal inspection times for $\eta = 0.1$ and $\eta = 0.3$, respectively; 11.2 and 11.7 are rounded to 11 and 12, respectively. This agrees with the results of Section 5.2.2. Moreover, $V_1$ shows a monotonically increasing trend, which agrees with the law of fatigue crack propagation: the crack growth rate is very small at the early stage and then accelerates. In addition, $V_2$ is larger than $V_1$ at the early stage but smaller at the later stage. For this reason, the intersection of $V_1$ and $V_2$ is likely to occur at the intermediate stage, i.e., the optimal inspection time is near the midpoint of the remaining service life.
Note that this subsection focuses on the first inspection under IR2 (the most efficient inspection rule). It is verified that these results hold for the other inspection rules (IR1 and IR3) and for the $n$th ($n = 2, 3, \ldots$) inspection.

5.5. Suggestions for Future Research

While this study primarily concentrated on determining the optimal inspection time, it can be extended to consider other performance indicators. For example, researchers can optimize the number of OWTs to be inspected at once. This entails determining, within resource constraints, the ideal quantity of OWTs to maximize the performance and reliability of the entire wind farm. Such studies will contribute to more holistic inspection optimization strategies, enhancing the long-term sustainability and economic viability of offshore wind farms.

6. Conclusions

This study, for the first time, develops an approach to establish the optimal inspection planning for offshore wind farms based on an SRG model. SRG is defined as the percentage increase of the system-level expected posterior reliability index relative to the prior reliability index. The prior reliability index is calculated by the S-N method integrated with system reliability theory, and the expected posterior reliability index is obtained by Bayesian inference. The optimal inspection planning is obtained by solving an optimization problem that maximizes SRG. Three different inspection rules (IR1, IR2, and IR3) are defined, and their efficiency is compared. The main conclusions are as follows:
  • Enhancing the inspection efficiency (in terms of maximum SRG) can be achieved by comprehensively repairing in response to a ‘detection’ inspection outcome, changing the inspection object for each inspection, and improving the inspection quality. This identifies ways to improve the inspection efficiency of offshore wind farms, providing a basis for the decision-making of asset owners and managers;
  • When comparing the maximum SRG of the inspection strategies for a single OWT and for the offshore wind farm under IR1, both obey the law of diminishing marginal system-level reliability growth, but the diminishing rate of a single OWT is slower than that of the offshore wind farm. Note that the SRG model of a single OWT is a special case of that of the offshore wind farm under IR1;
  • It is interesting to determine the law of diminishing marginal SRG, i.e., the maximum SRG gained from one additional inspection decreases as the number of inspections increases. This emphasizes the need to find the optimal number of inspections by weighing the benefits against costs rather than simply increasing the number of inspections;
  • The expected posterior failure probability can be decomposed into two components. The first component is exactly the prior failure probability, which reflects the prior physical model, while the second component is the posterior failure probability considering inspection data, which reflects the inspection action;
  • The optimal inspection time is when the increasing rate of the first component of the expected posterior failure probability is equal to the decreasing rate of the second component. This time is near the midpoint of the remaining service life;
  • The innovative index (SRG) highlights the importance of reliability performance, which is the focus of large offshore structures such as offshore platforms, offshore wind farms, etc. Moreover, this index is dimensionless, making comparative studies under different inspection strategies easier.

Author Contributions

Conceptualization, G.Z.; methodology, L.L.; software, L.L.; validation, G.Z.; formal analysis, L.L.; investigation, L.L.; resources, G.Z.; data curation, L.L.; writing—original draft preparation, L.L.; writing—review and editing, L.L. and G.Z.; visualization, L.L.; supervision, G.Z.; project administration, G.Z.; funding acquisition, G.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Department of Science and Technology of Guangdong Province (No. 2023A1515240057) and the Southern University of Science and Technology (No. Y01316134).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zou, G.; Faber, M.H.; González, A.; Banisoleiman, K. A holistic approach to risk-based decision on inspection and design of fatigue-sensitive structures. Eng. Struct. 2020, 221, 110949. [Google Scholar] [CrossRef]
  2. Cheng, A.; Chen, N.-Z.; Pu, Y.; Yu, J. Fatigue crack growth prediction for small-scale yielding (SSY) and non-SSY conditions. Int. J. Fatigue 2020, 139, 105768. [Google Scholar] [CrossRef]
  3. Huang, W.; Garbatov, Y.; Guedes Soares, C. Fatigue reliability assessment of a complex welded structure subjected to multiple cracks. Eng. Struct. 2013, 56, 868–879. [Google Scholar] [CrossRef]
  4. Li, L.; Zou, G. A novel computational approach for assessing system reliability and damage detection delay: Application to fatigue deterioration in offshore structures. Ocean. Eng. 2024, 297, 117023. [Google Scholar] [CrossRef]
  5. Frangopol, D.M.; Soliman, M. Life-cycle of structural systems: Recent achievements and future directions. Struct. Infrastruct. Eng. 2016, 12, 1–20. [Google Scholar] [CrossRef]
  6. Moan, T. Life-cycle assessment of marine civil engineering structures. Struct. Infrastruct. Eng. 2011, 7, 11–32. [Google Scholar] [CrossRef]
  7. Zou, G.; Banisoleiman, K.; González, A.; Faber, M.H. Probabilistic investigations into the value of information: A comparison of condition-based and time-based maintenance strategies. Ocean. Eng. 2019, 188, 106181. [Google Scholar] [CrossRef]
  8. Volkanovski, A.; Čepin, M.; Mavko, B. Application of the fault tree analysis for assessment of power system reliability. Reliab. Eng. Syst. Safe 2009, 94, 1116–1127. [Google Scholar] [CrossRef]
  9. Ding, R.; Liu, Z.; Xu, J.; Meng, F.; Sui, Y.; Men, X. A novel approach for reliability assessment of residual heat removal system for HPR1000 based on failure mode and effect analysis, fault tree analysis, and fuzzy Bayesian network methods. Reliab. Eng. Syst. Safe 2021, 216, 107911. [Google Scholar] [CrossRef]
  10. Lu, N.; Liu, Y.; Beer, M. System reliability evaluation of in-service cable-stayed bridges subjected to cable degradation. Struct. Infrastruct. Eng. 2018, 14, 1486–1498. [Google Scholar] [CrossRef]
  11. Kim, S.; Frangopol, D.M. Optimum inspection planning for minimizing fatigue damage detection delay of ship hull structures. Int. J. Fatigue 2011, 33, 448–459. [Google Scholar] [CrossRef]
  12. Moan, T. Reliability-based management of inspection, maintenance and repair of offshore structures. Struct. Infrastruct. Eng. 2005, 1, 33–62. [Google Scholar] [CrossRef]
  13. Luque, J.; Straub, D. Risk-based optimal inspection strategies for structural systems using dynamic Bayesian networks. Struct. Saf. 2019, 76, 68–80. [Google Scholar] [CrossRef]
  14. Straub, D. Stochastic Modeling of Deterioration Processes through Dynamic Bayesian Networks. J. Eng. Mech. 2009, 135, 1089–1099. [Google Scholar] [CrossRef]
  15. Groden, M.; Collette, M. Fusing fleet in-service measurements using Bayesian networks. Mar. Struct. 2017, 54, 38–49. [Google Scholar] [CrossRef]
  16. Hackl, J.; Kohler, J. Reliability assessment of deteriorating reinforced concrete structures by representing the coupled effect of corrosion initiation and progression by Bayesian networks. Struct. Saf. 2016, 62, 12–23. [Google Scholar] [CrossRef]
  17. Song, S.; Lu, Z.; Qiao, H. Subset simulation for structural reliability sensitivity analysis. Reliab. Eng. Syst. Safe 2009, 94, 658–665. [Google Scholar] [CrossRef]
  18. Papaioannou, I.; Betz, W.; Zwirglmaier, K.; Straub, D. MCMC algorithms for Subset Simulation. Probab. Eng. Mech. 2015, 41, 89–103. [Google Scholar] [CrossRef]
  19. Bismut, E.; Straub, D. Optimal adaptive inspection and maintenance planning for deteriorating structural systems. Reliab. Eng. Syst. Safe 2021, 215, 107891. [Google Scholar] [CrossRef]
  20. Straub, D.; Faber, M.H. Computational aspects of risk-based inspection planning. Comput.-Aided. Civ. Inf. 2006, 21, 179–192. [Google Scholar] [CrossRef]
  21. Kim, S.; Frangopol, D.M. Cost-based optimum scheduling of inspection and monitoring for fatigue-sensitive structures under uncertainty. J. Struct. Eng. 2011, 137, 1319–1331. [Google Scholar] [CrossRef]
  22. Howard, R.A. Dynamic Programming and Markov Processes; Technology Press and Wiley: New York, NY, USA, 1960. [Google Scholar]
  23. Bellman, R. A Markovian Decision Process. J. Math. Mech. 1957, 6, 679–684. [Google Scholar] [CrossRef]
  24. Papakonstantinou, K.G.; Shinozuka, M. Optimum inspection and maintenance policies for corroded structures using partially observable Markov decision processes and stochastic, physically based models. Probab. Eng. Mech. 2014, 37, 93–108. [Google Scholar] [CrossRef]
  25. Faddoul, R.; Raphael, W.; Chateauneuf, A. A generalised partially observable Markov decision process updated by decision trees for maintenance optimisation. Struct. Infrastruct. Eng. 2011, 7, 783–796. [Google Scholar] [CrossRef]
  26. Schöbi, R.; Chatzi, E.N. Maintenance planning using continuous-state partially observable Markov decision processes and non-linear action models. Struct. Infrastruct. Eng. 2016, 12, 977–994. [Google Scholar] [CrossRef]
  27. Andriotis, C.P.; Papakonstantinou, K.G. Managing engineering systems with large state and action spaces through deep reinforcement learning. Reliab. Eng. Syst. Safe 2019, 191, 106483. [Google Scholar] [CrossRef]
  28. Garbatov, Y.; Guedes Soares, C. Cost and reliability based strategies for fatigue maintenance planning of floating structures. Reliab. Eng. Syst. Safe 2001, 73, 293–301. [Google Scholar] [CrossRef]
  29. Straub, D.; Faber, M.H. Risk based inspection planning for structural systems. Struct. Saf. 2005, 27, 335–355. [Google Scholar] [CrossRef]
  30. Nielsen, J.J.; Sørensen, J.D. On risk-based operation and maintenance of offshore wind turbine components. Reliab. Eng. Syst. Safe 2011, 96, 218–229. [Google Scholar] [CrossRef]
  31. Zou, G.; Kolios, A. Quantifying the value of negative inspection outcomes in fatigue maintenance planning: Cost reduction, risk mitigation and reliability growth. Reliab. Eng. Syst. Safe 2022, 226, 108668. [Google Scholar] [CrossRef]
  32. Yeter, B.; Garbatov, Y.; Guedes Soares, C. Risk-based maintenance planning of offshore wind turbine farms. Reliab. Eng. Syst. Safe 2020, 202, 107062. [Google Scholar] [CrossRef]
  33. Palmgren, A. Die lebensdauer von kugellagern. Z.VDI. 1924, 68, 339–341. [Google Scholar]
  34. Miner, M.A. Cumulative damage in fatigue. J. Appl. Mech. 1945, 12, 159–164. [Google Scholar] [CrossRef]
  35. Luque, J.; Straub, D. Reliability analysis and updating of deteriorating systems with dynamic Bayesian networks. Struct. Saf. 2016, 62, 34–46. [Google Scholar] [CrossRef]
  36. Schneider, R.; Thöns, S.; Straub, D. Reliability analysis and updating of deteriorating systems with subset simulation. Struct. Saf. 2017, 64, 20–36. [Google Scholar] [CrossRef]
  37. Zhang, M. Structural Reliability Analysis: Methods and Procedures (in Chinese); Science Press: Beijing, China, 2009; p. 147. [Google Scholar]
  38. Straub, D.; Schneider, R.; Bismut, E.; Kim, H.-J. Reliability analysis of deteriorating structural systems. Struct. Saf. 2020, 82, 101877. [Google Scholar] [CrossRef]
  39. Zou, G.; Faber, M.H.; González, A.; Banisoleiman, K. Fatigue inspection and maintenance optimization: A comparison of information value, life cycle cost and reliability based approaches. Ocean. Eng. 2021, 220, 108286. [Google Scholar] [CrossRef]
  40. Ge, B.; Kim, S. Probabilistic service life prediction updating with inspection information for RC structures subjected to coupled corrosion and fatigue. Eng. Struct. 2021, 238, 112260. [Google Scholar] [CrossRef]
  41. Zárate, B.A.; Caicedo, J.M.; Yu, J.; Ziehl, P. Bayesian model updating and prognosis of fatigue crack growth. Eng. Struct. 2012, 45, 53–61. [Google Scholar] [CrossRef]
  42. Zitrou, A.; Bedford, T.; Daneshkhah, A. Robustness of maintenance decisions: Uncertainty modelling and value of information. Reliab. Eng. Syst. Safe 2013, 120, 60–71. [Google Scholar] [CrossRef]
  43. Zou, G.; Faber, M.H.; González, A.; Banisoleiman, K. A simplified method for holistic value of information computation for informed structural integrity management under uncertainty. Mar. Struct. 2021, 76, 102888. [Google Scholar] [CrossRef]
Figure 1. Offshore wind farm.
Figure 2. Event tree analysis when the number of inspections is 2.
Figure 3. Probability of detection for two inspection methods (i.e., $\eta = 0.1$ and $\eta = 0.3$).
Figure 4. Prior system reliability index versus service time.
Figure 5. SRG of the offshore wind farm versus the first inspection time under IR1.
Figure 6. SRG of the offshore wind farm versus the second inspection time for a ‘detection’ outcome of the first inspection under IR1.
Figure 7. SRG of the offshore wind farm versus the second inspection time for a ‘no detection’ outcome of the first inspection under IR1.
Figure 8. SRG of the offshore wind farm versus the first inspection time under IR2.
Figure 9. SRG of the offshore wind farm versus the second inspection time for a ‘detection’ outcome of the first inspection under IR2.
Figure 10. SRG of the offshore wind farm versus the second inspection time for a ‘no detection’ outcome of the first inspection under IR2.
Figure 11. SRG of the offshore wind farm versus the first inspection time under IR3.
Figure 12. SRG of the offshore wind farm versus the second inspection time for a ‘detection’ outcome of the first inspection under IR3.
Figure 13. SRG of the offshore wind farm versus the second inspection time for a ‘no detection’ outcome of the first inspection under IR3.
Figure 14. Two components of the expected posterior failure probability versus the first inspection time under IR2.
Figure 15. Increasing/decreasing rates of the two components of the expected posterior failure probability versus the first inspection time under IR2.
Table 1. Parameters in the limit state function.

Random Variable   Distribution    Units            Mean           Std. Deviation
D_c               Lognormal       -                1              0.2
A_1,i             Normal          -                1              0.15
A_2,i             Normal          -                1              0.1
A_3,i             Normal          -                1              0.05
A_4,i             Lognormal       -                1              0.2
ln q_i            Normal          q in N/mm^2      1.8            0.22
a_bar             Deterministic   -                5.81 × 10^11   -
m                 Deterministic   -                3              -
h                 Deterministic   -                1              -
N_a               Deterministic   cycles/year      4 × 10^6       -
Table 2. Maximum SRG (%) under IR1, IR2, and IR3.

        1st Inspection        2nd Inspection (D̄) a     2nd Inspection (D) b
        η = 0.1   η = 0.3     η = 0.1   η = 0.3         η = 0.1   η = 0.3
IR1     112.0     59.2        5.7       3.4             12.1      4.6
IR2     112.0     59.2        14.0      8.3             12.1      4.6
IR3     10.8      9.3         5.0       4.9             5.7       5.6

a D̄ means the previous inspection outcome is ‘no detection’. b D means the previous inspection outcome is ‘detection’.
Table 3. Maximum SRG (%) for the system and a single OWT.

        1st Inspection        2nd Inspection (D̄) a     2nd Inspection (D) b
        η = 0.1   η = 0.3     η = 0.1   η = 0.3         η = 0.1   η = 0.3
System  112.0     59.2        5.7       3.4             12.1      4.6
OWT     33.8      25.4        13.7      6.6             28.8      16.3

a D̄ means the previous inspection outcome is ‘no detection’. b D means the previous inspection outcome is ‘detection’.

