1. Introduction
High-performance fiber-reinforced concrete (HPFRC) exhibits superior ductility and toughness compared to ordinary concrete [1,2,3,4]. As a result, HPFRC has found extensive applications in practical engineering in recent years [5,6,7]. Concrete structures in the northern coastal regions of China are subject not only to the detrimental effects of chloride salt intrusion but also to the freeze−thaw cycles characteristic of the northern sea areas. The nonlinear coupled effects arising from the interaction of these two factors accelerate material degradation and performance deterioration of concrete components, making durability a prominent concern [8,9,10,11]. Besides structural design measures, such as applying protective coatings and increasing the thickness of the concrete cover, it is imperative to research ways of enhancing the inherent durability of the concrete itself.
Concrete durability is chiefly a matter of frost resistance and resistance to chloride ion penetration, as these two properties directly affect the long-term performance and safety of concrete structures. Frost resistance refers to the performance of concrete under low temperatures and freeze−thaw cycling. The volume expansion of water in concrete during freezing can generate and propagate microcracks, thereby reducing the structural integrity and load-bearing capacity of the concrete; in regions with cold winters, frost resistance is therefore a key factor in ensuring the integrity and safety of concrete structures. Resistance to chloride ion penetration describes how well concrete resists the ingress of chloride ions. The penetration of chloride ions (mainly from salt water or seawater) is one of the main causes of corrosion of the steel reinforcement in concrete, and such corrosion can seriously compromise the integrity and durability of concrete structures. Improving the resistance of concrete to chloride ion penetration therefore helps prevent steel corrosion and prolong the service life of concrete structures. Improving these two properties significantly extends the durability and service life of concrete structures [12,13,14,15,16].
Numerous scholars have studied the parameters that affect the durability of concrete, and their results indicate that frost resistance and chloride ion permeability are mainly governed by the mix proportions of the raw materials, such as cement, water, aggregates, and admixtures [17,18,19,20,21]. These factors determine the durability and service life of concrete structures. At present, most practitioners follow the “General Concrete Mix Design Code” [22] for mix proportion design and employ orthogonal experiments to seek the optimal mix. However, orthogonal experiments have drawbacks, including a substantial workload, low predictive accuracy, and suboptimal results [23,24,25]. Moreover, they cannot establish an explicit functional relationship between the factors and the response values within a specified region [26,27].
To unravel the complex relationship between concrete mix proportions and resistance to freezing and chloride ion penetration, statistical models are often introduced into the relevant experiments and analyses. The response surface method (RSM) is commonly used to predict the durability of concrete [28,29]. RSM, a product of the fusion of mathematics and statistics, can establish mathematical models between multiple factors and one or more response values with minimal experimental data [30,31]. It evaluates the impact of interactions among factors on the response values, determines the optimal response values, and offers advantages over orthogonal experiments, such as fewer trials, lower costs, and higher predictive accuracy [32]. Bheel et al. [33] employed RSM’s central composite design (CCD) to establish a relationship between 13 raw material contents and eight target values in engineered cementitious composites (ECC); validation experiments showed a strong correlation between the predicted values and the experimental data. Wang et al. [34] used the CCD to design experiments on basalt fiber foam concrete and achieved multi-objective optimization by incorporating utility functions. Zhang et al. [35] used RSM with a Box−Behnken design (BBD) to obtain the optimal aggregate grading and admixture dosage for permeable concrete made with recycled aggregates. These studies demonstrate the significant advantages of RSM for optimizing construction material mixtures. However, research on applying RSM to optimize the mix proportions of high-performance fiber-reinforced concrete remains scarce.
In addition, concrete mix proportion design must take the economic cost requirements of the engineering application into consideration [36]. However, there is a conflict between the durability of concrete and its economic cost [37]. In recent years, the nondominated sorting genetic algorithm (NSGA) has been applied to concrete mix proportion design, providing a new approach to multi-objective optimization problems (MaOPs) [38,39]. The basic NSGA proposed by Srinivas and Deb [40] has been widely used to solve MaOPs, but it suffers from high computational complexity. Deb et al. [41] therefore proposed NSGA-II, which incorporates elite preservation, fast nondominated sorting, and a crowding-distance selection operator. NSGA-II offers fast execution and good convergence; however, crowding-distance selection becomes ineffective in objective spaces of three or more dimensions, reducing the diversity of the solutions. Reducing the complexity of the dataset may also improve the accuracy of deep learning models: simplifying the dataset can help such models learn the key problem-related features more effectively, thereby improving their performance and generalization ability. In practice, finding the appropriate level of dataset complexity often requires adjustment based on domain knowledge and experimental results [42,43].
Hence, Deb and Jain [44] introduced NSGA-III. By comparison, NSGA-III searches directly for the Pareto optimal solutions in the objective space, eliminating issues such as transformation parameters and information loss and making the search process simple and intuitive. Furthermore, the inherent characteristics of genetic algorithms make NSGA-III widely adaptable: mixing continuous and discrete input variables does not significantly affect the algorithm’s performance. NSGA-III guides the selection of nondominated solutions using uniformly distributed reference points, effectively ensuring the wide distribution and diversity of nondominated solutions in high-dimensional objective spaces. NSGA-III is widely regarded as one of the most effective algorithms for MaOPs [45,46,47]. It has been applied, and proven effective, in multi-objective optimization across fields such as automation technology, water supply, and aerospace [48,49,50]. However, reported applications of NSGA-III to concrete mix proportion design remain notably scarce.
This study begins with a Latin hypercube experimental design for mix proportion development. Concrete specimens are then fabricated for each mix, and frost resistance and chloride ion permeability tests are conducted to obtain the relative dynamic modulus of elasticity and the chloride ion migration coefficient for each mix proportion. A response surface model is then established. Finally, the constructed response surface model is coupled with the NSGA-III algorithm to achieve multi-objective optimization of high-performance fiber-reinforced concrete.
2. Preliminary Information
2.1. Latin Hypercube Design
Before designing and optimizing the mix proportion of high-performance fiber-reinforced concrete, the design space must first be sampled using an experimental design method to generate a set of sample points. Commonly used experimental design methods include orthogonal design, uniform design, and Latin hypercube sampling. Latin hypercube design (LHD) is a method for experimental design and design-space sampling whose core idea is to ensure that each level value in each dimension is paired evenly and randomly with the levels of the other dimensions. This approach achieves wide coverage of the design space while reducing the number of samples, which improves sampling efficiency compared to completely random sampling.
The key elements to ensure that the results of Latin hypercube sampling are unbiased and effective are as follows:
- (1)
Uniformity: The core goal of LHD is to ensure a uniform distribution of sample points in each dimension, guaranteeing comprehensive coverage of the design space.
- (2)
Randomness: By randomly selecting the sample point within each interval of each dimension, LHD introduces sufficient randomness into the sampling process so that the results are not biased toward particular points.
- (3)
Reduced sample size: Compared to exhaustive sampling, LHD reduces the required number of samples by selecting sample points efficiently, thereby improving sampling efficiency.
When LHD is applied to design-space sampling in a multidimensional space, each dimension is divided into equal intervals and one sample point is selected within each interval, ensuring a uniform selection of sample points across the entire design space. This helps to capture representative features of the design space rather than sampling only certain local areas. The basic theory is as follows:
Assume that the probability distribution function of the i-th element of the K-dimensional random variable x is $F_i$ (i = 1, 2, …, K), that the elements of x are independent of each other, and that each element is sampled N times; $x_{jk}$ denotes the value of the j-th (j = 1, 2, …, N) sample of the k-th (k = 1, 2, …, K) element. Define an N × K matrix P, each column of which is a random permutation of the sequence {1, 2, …, N}. If the random variables $\xi_{jk}$ follow a uniform distribution on the interval [0, 1], the sampled values are:

$$x_{jk} = F_k^{-1}\!\left(\frac{p_{jk} - \xi_{jk}}{N}\right)$$

where $p_{jk}$ is the element in row j and column k of the N × K matrix P.
Assume a function h(x) exists. The unbiased estimate of the mean E(h(x)) of h(x) is defined as:

$$\hat{h} = \frac{1}{N}\sum_{j=1}^{N} h(x_j)$$

For simple random sampling, the variance of this unbiased estimate is:

$$\operatorname{Var}\!\left(\hat{h}_{\mathrm{SRS}}\right) = \frac{1}{N}\operatorname{Var}\!\left(h(x)\right)$$

For Latin hypercube sampling, the variance of the unbiased estimate is:

$$\operatorname{Var}\!\left(\hat{h}_{\mathrm{LHS}}\right) = \frac{1}{N}\operatorname{Var}\!\left(h(x)\right) + \frac{N-1}{N}\operatorname{cov}\!\left(h(x_{1n}), h(x_{2n})\right)$$

It can be proven that the term $(N-1)\operatorname{cov}(h(x_{1n}), h(x_{2n}))/N$ is negative, so the LHS estimator has a smaller variance, and Latin hypercube sampling converges faster than simple random sampling.
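As a small numeric check of this variance comparison, the following Python sketch estimates the mean of an arbitrary monotone test function h (an illustrative choice, not a function from this study) by simple random sampling and by Latin hypercube sampling, and compares the empirical variances of the two estimators:

```python
# Numeric check of the variance result above, using the illustrative
# test function h(x) = x1 + x2^2 on [0, 1]^2.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(0)
h = lambda x: x[:, 0] + x[:, 1] ** 2
N, reps = 16, 2000

# Empirical variance of the mean estimator under simple random sampling.
srs = [h(rng.random((N, 2))).mean() for _ in range(reps)]
# The same estimator under Latin hypercube sampling.
lhs = [h(qmc.LatinHypercube(d=2, seed=s).random(N)).mean() for s in range(reps)]

print(f"SRS variance: {np.var(srs):.2e}")  # larger
print(f"LHS variance: {np.var(lhs):.2e}")  # smaller: negative covariance term
```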
The key to ensuring unbiased and efficient results when LHD partitions the experimental domain lies in its design method, which guarantees representative samples by selecting sample points uniformly and randomly. This allows the design space to be explored more effectively in tasks such as experimental design and parameter optimization, reducing the number of required experiments and improving their efficiency and cost-effectiveness. Numerous scholars have verified this through theoretical research [51,52]. We therefore select LHD to generate the sample points required for the concrete mix design.
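To illustrate how LHD generates mix design sample points, the following Python sketch draws a 36-point Latin hypercube sample over a seven-factor design space using SciPy's qmc module; the factor names and bounds are illustrative assumptions, not the ranges used in this study:

```python
# Minimal sketch of Latin hypercube sampling over a seven-factor mix
# design space with SciPy. Factor names and bounds are illustrative
# assumptions, not the mix limits used in this study.
from scipy.stats import qmc

factors = ["water", "cement", "fly_ash", "fine_agg", "coarse_agg",
           "superplasticizer", "fiber"]
l_bounds = [150, 300, 50, 600, 1000, 2.0, 0.5]   # assumed lower bounds
u_bounds = [190, 450, 150, 800, 1200, 6.0, 2.0]  # assumed upper bounds

sampler = qmc.LatinHypercube(d=len(factors), seed=42)
unit_sample = sampler.random(n=36)        # 36 points in the unit hypercube
mixes = qmc.scale(unit_sample, l_bounds, u_bounds)

# Each row is one candidate mix; each column is stratified so that every
# factor's range is covered evenly across the 36 samples.
print(mixes.shape)  # (36, 7)
```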
2.2. Response Surface Model
Response surface methodology is a method for optimizing experimental conditions that is suited to fitting the complex nonlinear relationship between optimization objectives and experimental factors. The multivariate second-order response surface model is generally expressed as:

$$y(\mathbf{x}) = \beta_0 + \sum_{i=1}^{m}\beta_i x_i + \sum_{i=1}^{m}\beta_{ii} x_i^2 + \sum_{i=1}^{m-1}\sum_{j=i+1}^{m}\beta_{ij} x_i x_j$$

where y(x) is the response objective function; $x_i$ and $x_j$ are the i-th and j-th experimental factors; $\beta_0$ is the constant term; $\beta_i$, $\beta_{ii}$, and $\beta_{ij}$ are the linear, quadratic, and interaction coefficients, respectively; and m is the number of parameters to be optimized.
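As an illustration of fitting this second-order model, the following Python sketch uses scikit-learn's PolynomialFeatures to generate the constant, linear, quadratic, and interaction terms and then estimates the coefficients by least squares; the data here are simulated placeholders, not measurements from this study:

```python
# Minimal sketch of fitting the second-order response surface model with
# scikit-learn. X would hold the LHD mix proportions and y one measured
# response (e.g., relative dynamic modulus of elasticity); both are
# simulated placeholders here.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(36, 3))   # 3 factors for brevity
y = 2 + X @ [1.5, -0.8, 0.4] + 0.6 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 36)

# degree=2 generates the constant, linear, quadratic, and interaction
# terms (beta_0, beta_i, beta_ii, beta_ij) of the RSM polynomial.
poly = PolynomialFeatures(degree=2, include_bias=True)
X_quad = poly.fit_transform(X)

model = LinearRegression(fit_intercept=False).fit(X_quad, y)
print(dict(zip(poly.get_feature_names_out(), np.round(model.coef_, 3))))
```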
2.3. NSGA-III Algorithm
Nondominated sorting genetic algorithm III (NSGA-III) is a widely used multi-objective optimization (MO) algorithm designed to address two key challenges: maintaining good solution diversity and ensuring good solution convergence. It is an improved version that remedies the shortcomings of its predecessor, NSGA-II, which loses solution diversity and accuracy on high-dimensional problems.
The core operations of NSGA-III include nondominated sorting, calculation of crowding distance, evolutionary operations (selection, crossover, and mutation), and environmental selection. Its unique features and mechanisms are mainly reflected in the following points:
Reference point mechanism: NSGA-III introduces reference points to improve the diversity of solutions. During initialization, the algorithm generates a set of reference points, which are used in each generation to select solutions for the next generation; solutions are selected so as to minimize their distance to the reference points. This ensures an even distribution and good coverage of the solutions.
Multiple nondominated levels: NSGA-III performs nondominated sorting of the solutions into several fronts, each front dominating those below it. In each generation, the algorithm gives priority to solutions from the higher fronts.
Crowding distance: To maintain population diversity, NSGA-III uses a crowding mechanism. Among solutions on the same front, less crowded solutions (i.e., those with more “space” around them) are preferred. This prevents the algorithm from over-concentrating on a small portion of the search space, thereby preserving the diversity of the solutions.
Additional parents: When selecting solutions to create the next generation, NSGA-III considers not only the current parents (the P population) but also new candidate solutions generated as offspring (the Q population). This combined set, known as the “joint population”, increases the diversity of solutions and accelerates evolution.
Special environmental selection strategy: When a new P population must be selected, NSGA-III first selects the nondominated solutions and then adds the remaining solutions to the population according to the reference point allocation strategy, which ensures the convergence of the solutions in multi-objective optimization problems.
Overall, NSGA-III effectively addresses multi-objective optimization problems through these mechanisms, overcomes weaknesses in solution diversity, and provides uniformly distributed solutions on the Pareto front, thereby enhancing the convergence of the algorithm. These characteristics make NSGA-III well suited to practical engineering problems such as high-performance fiber-reinforced concrete. Meanwhile, to balance the relationships between the objective functions, an adaptive normalization technique is introduced. The ideal point of the population $S_t = F_1 \cup F_2 \cup \dots \cup F_l$ is defined by the minimum value attained by $S_t$ on each objective. When normalizing multiple objectives, hyperplanes are constructed by seeking the extreme points to determine the intercepts, and the obtained intercepts are then used to normalize each objective individually. Considering that the hybrid NSGA-III produces a Pareto solution set that closely approximates the true optimal solution set of the problem, the Pareto set obtained after multi-objective optimization can be taken as the final optimal solution; therefore, the maximum value of the i-th objective in the population can be used in place of the intercept of the corresponding objective:

$$f_i^{n}(x) = \frac{f_i(x) - z_i^{\min}}{z_i^{\max} - z_i^{\min}}, \quad i = 1, 2, \dots, M$$

where M is the number of objectives; x is the decision variable; $f_i(x)$ is the value of x on the i-th objective; $z_i^{\min}$ and $z_i^{\max}$ are the minimum and maximum values of the population on the i-th objective, respectively; and $f_i^{n}(x)$ is the normalized value of the i-th objective.
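As an illustration of the algorithm's use, the following Python sketch runs NSGA-III from the pymoo library with Das-Dennis reference directions on a standard three-objective benchmark (DTLZ2); in this study's setting, the benchmark would be replaced by a custom problem whose objectives evaluate the fitted RSM models for durability and cost, which is an assumption noted in the comments, not pymoo's built-in behavior:

```python
# Minimal sketch of NSGA-III with the pymoo library on the standard
# three-objective DTLZ2 benchmark. Substituting a custom pymoo Problem
# whose objectives evaluate the fitted RSM models (durability criteria
# and cost) is an assumed extension, not shown here.
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.optimize import minimize
from pymoo.problems import get_problem
from pymoo.util.ref_dirs import get_reference_directions

# Das-Dennis reference directions: uniformly spread points on the unit
# simplex that guide the reference-point-based selection described above.
ref_dirs = get_reference_directions("das-dennis", 3, n_partitions=12)

algorithm = NSGA3(ref_dirs=ref_dirs, pop_size=92)
problem = get_problem("dtlz2")

res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.F.shape)  # nondominated objective vectors approximating the front
```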
6. Conclusions
Currently, high-performance fiber-reinforced concrete (HPFRC) finds extensive applications in practical engineering projects both in China and abroad. However, as the service environments of concrete structures become increasingly harsh, the demand for durability grows. Rational mix design plays a crucial role in improving the durability of HPFRC, extending the service life of concrete components, and reducing life-cycle maintenance costs. Nevertheless, another essential objective of concrete mix design is cost reduction, which can conflict with the goal of improved durability. Traditional mix design methods suffer from low efficiency and suboptimal results, making the multi-objective optimization of durability and economic cost for HPFRC a challenging task. This article therefore introduces an intelligent optimization framework based on hybrid algorithms. A Latin hypercube experimental design is employed, considering the water, cement, fly ash, fine aggregate, and coarse aggregate contents together with the superplasticizer and fiber dosages. The evaluation criteria are the relative dynamic modulus of elasticity, chloride ion resistance, and economic cost, and a response surface prediction model is established for each criterion. The NSGA-III algorithm is then coupled with the RSM models to search autonomously for the mix design that maximizes overall performance. The optimization results and comparative experiments show that the proposed hybrid-algorithm framework effectively optimizes the mix proportion of HPFRC: it meets the durability requirements to a considerable extent while keeping costs under control.
The hybrid algorithm proposed in this article achieves multi-objective optimization of high-performance fiber-reinforced concrete within a certain range, but it also has limitations. The performance of machine learning models is usually influenced by the quantity, reliability, and complexity of the training data, and larger datasets do not necessarily improve model accuracy. In machine learning, this relates to “Kolmogorov complexity”, the length of the shortest computer program that produces a given output. Reducing the complexity of the dataset lowers the computational burden on the model, making it easier to capture the patterns and correlations in the data; the model can then predict and classify more accurately. Therefore, when designing and preparing datasets, the structure and features of the dataset should be simplified to a reasonable degree to improve the performance of deep learning models.
The 36 sets of data used to train the model in this study all came from the same engineering project, so the trained models may perform poorly when predicting data from other projects. In future research, collecting more diverse and specific data would better cover the characteristics and variability of different engineering projects and improve the generalization ability of the model, allowing it to predict the data of other projects accurately rather than being limited to the projects that supplied the training data. It would also reduce the bias and variance of the model and improve its reliability, making its predictions and optimizations of engineering materials more accurate and dependable. At the same time, constructing a hybrid algorithm that considers more parameters and objectives, further improving the effectiveness of multi-objective optimization, and promoting the development of new material design and optimization methods are our next research directions.