Article

Prediction Model of Component Content Based on Improved Black-Winged Kite Algorithm-Optimized Stochastic Configuration Network

1 School of Metallurgical Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China
2 Ganjiang Innovation Institute of the Chinese Academy of Sciences, Ganzhou 341000, China
3 Jiangxi Province Key Laboratory of Cleaner Production of Rare Earths, Ganzhou 341000, China
4 China Rare Earth Jiangxi Rare Earth Co., Ltd., Ganzhou 341000, China
5 School of Electrical and Automation Engineering, East China Jiaotong University, Nanchang 330013, China
6 Key Laboratory of Advanced Control and Optimization in Jiangxi Province, Nanchang 330013, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(20), 10880; https://doi.org/10.3390/app152010880
Submission received: 15 September 2025 / Revised: 7 October 2025 / Accepted: 7 October 2025 / Published: 10 October 2025

Abstract

Accurate prediction of component content in the rare-earth extraction and separation process is crucial for control system design, product quality control, and optimization of energy consumption. To improve prediction accuracy and modeling efficiency, this paper proposes a model for predicting component content based on an Improved Black-winged Kite Algorithm-Optimized Stochastic Configuration Network (IBKA-SCN). First, we develop an Improved Black-winged Kite Algorithm (IBKA), incorporating good point set initialization and Lévy random-walk strategies to enhance global optimization capability. Theoretical convergence analysis is provided to ensure the stability and effectiveness of the algorithm. Second, to address the issue that constraint parameters and weight-scaling factors in Stochastic Configuration Network (SCN) rely on manual experience and struggle to balance accuracy and efficiency, IBKA is employed to adaptively search for the optimal hyperparameter combination. The applicability of IBKA-SCN is corroborated through four real-world regression tasks. Finally, the effectiveness of the proposed method is validated through an engineering case study on predicting component content. The results show that IBKA-SCN significantly outperforms existing mainstream methods in both prediction accuracy and modeling speed.

1. Introduction

Rare-earth elements (REEs) have become national strategic resources due to their irreplaceability in critical fields such as electronics, aerospace, and defense. China has the world’s largest reserves of rare earths and is a global leader in solvent extraction and separation technology [1]. However, the current production process still relies heavily on offline sampling, empirical adjustments, and manual intervention, resulting in low automation levels, inefficient operations, high energy consumption, and significant product quality fluctuations, which pose major obstacles to industrial upgrading. Real-time monitoring of the concentration of individual rare-earth elements within extraction mixer settlers is a prerequisite for optimizing the ratio of extractants and scrubbing agents and for achieving closed-loop control. Currently, industrial practices mainly depend on offline testing methods such as ICP-AES (Inductively Coupled Plasma–Atomic Emission Spectrometry) [2], ICP-MS (Inductively Coupled Plasma–Mass Spectrometry) [3], XRF (X-Ray Fluorescence Spectrometry) [4], and UV-Vis spectrophotometry [5]. These techniques are characterized by long detection cycles, potential radiation hazards, complex instrumentation, and high maintenance costs, making them unsuitable for real-time control requirements. Consequently, research into rapid detection methods for rare-earth element content in extraction processes is of significant academic value and engineering importance.
In recent years, with the proliferation of intelligent modeling methods such as support vector machines and neural networks in process control, data-driven soft sensor modeling approaches have also been increasingly adopted in rare-earth extraction and separation processes. References [6,7] developed soft sensors for predicting rare-earth component content by using key influencing factors, such as feed flow rate, extractant flow rate, scrubbing solution flow rate, and feed composition, as auxiliary variables, and the corresponding element content as the primary variable. These models, integrated with conventional machine learning algorithms, achieved predictions that meet actual production standards. Reference [8] investigated a model relating first-order moments of image color features in HSI color space to component content, enabling the detection of individual element concentrations in mixed solutions. To address real-time production requirements, reference [9] proposed an improved GRA-JITL-LSSVM model for online monitoring of component content in the rare-earth extraction process. This approach uses gray relational analysis (GRA) to assess trends and correlations between input and output variables and introduces a database update criterion to enhance anti-interference capability. A genetic algorithm with a stagnation backtracking strategy (SBS-GA) was also proposed to ensure global optimization of model parameters. With advances in artificial intelligence, deep learning has become a major focus in machine learning. Reference [10] employed a convolutional neural network (CNN) to extract abstract representations from raw images of Pr/Nd (praseodymium/neodymium) mixed solutions and constructed a regression model using a deep neural network to predict the content of each element.
Reference [1] proposed a soft measurement method integrating a transfer-learning-based residual attention convolutional network, which takes both explicit and implicit features of rare-earth solutions as model inputs. The method uses a one-dimensional CNN integrated with multi-residual attention blocks to alleviate gradient vanishing or explosion issues and incorporates transfer learning to significantly improve the training effectiveness of the target network. However, as network depth and parameter count increase, training time may become prolonged, and overfitting can become more severe. Moreover, in actual rare-earth extraction processes, changes in operating conditions significantly affect component content, and the presence of non-stationary data leads to poor convergence in the prediction models previously mentioned.
A Stochastic Configuration Network (SCN) is a novel feedforward neural network and has been widely applied in data-driven modeling in various engineering cases [11]. For instance, reference [12] implemented a soft sensor for ammonia nitrogen concentration detection using a genetic algorithm-optimized SCN. In reference [13], an improved zebra optimization algorithm was employed to determine the optimal hyperparameter combination of the SCN, enhancing network performance for short-term photovoltaic power forecasting. Reference [14] introduced a bidirectional SCN (BSCN) with a semi-random learning mechanism. Reference [15] utilized an $L_2$-regularized SCN to identify the operating conditions of ball mill loads, thereby improving ore grinding efficiency and reducing operational costs. Reference [16] proposed a two-dimensional SCN (2DSCN) for modeling image datasets. Furthermore, reference [17] extended the original SCN by developing a block-incremental SCN to improve learning efficiency and enable rapid modeling in industrial processes. In the context of predicting rare-earth component content, several researchers have also adopted SCN-based data-driven modeling approaches. For example, reference [18] applied an improved differential evolution algorithm to optimize the weights and biases of nodes in a block-incremental SCN, resulting in a more compact model for predicting rare-earth element concentrations. However, the hyperparameters of SCN are often set empirically, and different combinations can significantly impact network performance. There is a critical need for systematic methods to adaptively optimize these hyperparameters to enhance the applicability and reliability of SCN in rare-earth extraction processes.
Evolutionary computation employs optimization methods inspired by biological evolution. Its core idea originates from natural selection and mutation, achieving optimization through the evolution of population fitness [19]. Furthermore, this approach has been successfully applied to the training process of neural networks. With the continuous development of meta-heuristic algorithms, swarm intelligence optimization algorithms—similar to evolutionary computation—draw inspiration from the social behaviors of biological organisms to simulate various collective behaviors. Examples include the Gray Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and Dung Beetle Optimizer (DBO). Such algorithms have been widely used in various fields, such as energy-efficient train operation, fault detection, and network optimization. The Black-winged Kite Algorithm (BKA) is a novel swarm intelligence optimization algorithm proposed in 2024 [20]. Inspired by the attacking and migratory behaviors of black-winged kite populations, the algorithm iteratively moves individuals toward optimal positions by simulating these behaviors. It offers advantages such as high convergence accuracy, few adjustable parameters, and low complexity. Therefore, this paper adopts BKA to optimize the key hyperparameters of the Stochastic Configuration Network (SCN). However, the standard BKA suffers from insufficient population diversity and a tendency to fall into local optima. To address these limitations, an improved BKA is proposed, aiming to enhance its population diversity and global search capability. The main innovations and contributions of this paper are summarized as follows:
  • The global search capability of the BKA is enhanced by incorporating good point set initialization and Lévy flight random-walk strategies, and its convergence is rigorously proven and analyzed.
  • The IBKA is employed to optimize the constraint parameters and weight-scaling factors in the hyperparameter tuning of the SCN, resulting in a novel method named IBKA-SCN.
  • The proposed IBKA-SCN is applied to predict rare-earth element component content in a practical engineering case. Experimental results demonstrate that the prediction accuracy of SCN is significantly improved, and the performance of IBKA-SCN meets the requirements of industrial applications.
The structure of this paper is as follows, with an overview provided in Figure 1: Section 1 of the text serves as an introduction, in which the research background and its significance are outlined. Section 2 reviews relevant foundational work, including the SCN model and BKA algorithm. Section 3 elaborates on the improvement strategy for IBKA and presents experimental analysis. Section 4 details the construction of the IBKA-SCN model. Section 5 explains how the experimental validation and analysis were conducted. Finally, Section 6 summarizes the findings and outlines future work.

2. Related Work

In this section, we review the classical theory of SCN and BKA.

2.1. Stochastic Configuration Network

SCN is a supervised random-weight neural network proposed by Wang et al. in 2017 [11]. It contains an input layer, a hidden layer, and an output layer. The network starts from a minimal configuration and, under a supervisory mechanism governing the hidden-node parameters, hidden nodes are added incrementally until the network error falls within the tolerance bound. Its network structure is shown in Figure 2:
The key to SCN lies in its unique supervisory mechanism, which guarantees the network's universal approximation capability. Given a target function $f(x): \mathbb{R}^n \to \mathbb{R}^m$, suppose that $L-1$ nodes have been constructed, i.e., $f_{L-1}(x) = \sum_{i=1}^{L-1} \beta_i g_i(w_i^{\mathrm{T}} x + b_i)$ ($L = 1, 2, \ldots$, $f_0 = 0$). Here, $\beta_i$ denotes the output weight of the $i$-th node, and $g_i$ represents the activation function; this paper uses the sigmoid function as the activation function of the SCN. The residual error is written as $e_{L-1} = f - f_{L-1} = [e_{L-1,1}, \ldots, e_{L-1,m}]$. If this residual does not yet fall within the permissible error range, an additional node is generated, and output weights are calculated to compensate for the residual, so that the model output becomes $f_L = f_{L-1} + \beta_L g_L$. The weights and biases of the newly added node are randomly assigned under the supervisory mechanism.
Given $0 < r < 1$ and a non-negative real sequence $\{\mu_L\}$, where $\lim_{L \to \infty} \mu_L = 0$ and $\mu_L \le (1 - r)$, for $L = 1, 2, \ldots$, the following is defined:
$$\sigma_L = \sum_{q=1}^{m} \sigma_{L,q}, \qquad \sigma_{L,q} = (1 - r - \mu_L) \, \| e_{L-1,q} \|^2 \tag{1}$$
If the random basis function $g_L$ satisfies the inequality constraint in (2), the universal approximation property of the SCN is guaranteed:
$$\langle e_{L-1,q}, g_L \rangle^2 \ge b_g^2 \, \sigma_{L,q}, \quad q = 1, 2, \ldots, m \tag{2}$$
After the randomly generated parameters of the $L$-th node satisfy the inequality constraint, the output weights of the SCN are recalculated as shown in (3):
$$\beta^{*} = \arg\min_{\beta} \| H_L \beta - T \|_F^2 = H_L^{\dagger} T \tag{3}$$
where $H_L$ is the hidden-layer output matrix, $H_L^{\dagger}$ is its Moore–Penrose generalized inverse [21], $T$ is the target output matrix, and $\| \cdot \|_F$ represents the Frobenius norm.
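As a concrete illustration of the construction loop described above (the supervisory inequality followed by the pseudoinverse solve for the output weights), the following Python sketch implements a minimal SCN regressor. The candidate counts, the $r$ and $\lambda$ sequences, and the stopping thresholds are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scn_fit(X, T, L_max=15, T_max=50, tol=1e-3,
            r_seq=(0.9, 0.99, 0.999), lam_seq=(0.5, 1.0, 5.0, 10.0, 50.0)):
    """Minimal SCN sketch: grow hidden nodes under the supervisory
    inequality, then solve output weights by pseudoinverse."""
    n, d = X.shape
    e = T.copy()                         # residual e_0 = f, since f_0 = 0
    W, b, beta = [], [], np.zeros((0, T.shape[1]))
    for L in range(1, L_max + 1):
        best = None
        for lam in lam_seq:              # candidate scaling factors
            for r in r_seq:              # loosest-to-tightest constraint
                mu = (1.0 - r) / (L + 1)         # mu_L -> 0, mu_L <= 1 - r
                for _ in range(T_max):
                    w = np.random.uniform(-lam, lam, d)
                    bias = np.random.uniform(-lam, lam)
                    g = sigmoid(X @ w + bias)
                    # implementable form of the supervisory inequality,
                    # checked per output dimension q
                    xi = (e.T @ g) ** 2 / (g @ g) \
                        - (1 - r - mu) * np.sum(e ** 2, axis=0)
                    if np.all(xi >= 0):
                        score = np.sum((e.T @ g) ** 2) / (g @ g)
                        if best is None or score > best[0]:
                            best = (score, w, bias)
        if best is None:
            break                        # no admissible candidate: stop growing
        W.append(best[1]); b.append(best[2])
        H = sigmoid(X @ np.array(W).T + np.array(b))   # n x L hidden output
        beta = np.linalg.pinv(H) @ T                   # pseudoinverse solve
        e = T - H @ beta
        if np.sqrt(np.mean(e ** 2)) < tol:
            break
    return np.array(W), np.array(b), beta
```

Note that the inequality is checked in the standard implementable form $\langle e_{L-1,q}, g_L \rangle^2 / \langle g_L, g_L \rangle \ge (1 - r - \mu_L)\|e_{L-1,q}\|^2$, and the output weights are recomputed in full after each node is added.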

2.2. Black-Winged Kite Algorithm

The Black-winged Kite Algorithm (BKA) is a novel swarm intelligence optimization algorithm inspired by the behavioral patterns of black-winged kites in their natural habitat, including cooperative hunting and migratory movement.
Similar to most optimization algorithms, BKA employs random initialization to assign the position of each individual during the population initialization phase, as shown in (4):
$$X_i = BK_{lb} + \mathrm{rand} \cdot (BK_{ub} - BK_{lb}), \quad i = 1, 2, \ldots, N_B \tag{4}$$
where $N_B$ denotes the population size, $X_i$ represents the position of the $i$-th black-winged kite individual, $BK_{ub}$ and $BK_{lb}$ refer to the upper and lower bounds of each dimension of the individual's position, respectively, and $\mathrm{rand}$ is a random number uniformly distributed in $[0, 1]$.
During the attack phase, the black-winged kite population exhibits two distinct hunting behaviors: aerial circling and diving. These behaviors are quantitatively characterized in (5) and (6).
$$y_{t+1}^{i,j} = \begin{cases} y_t^{i,j} + n \, (1 + \sin r) \, y_t^{i,j}, & p < r \\ y_t^{i,j} + n \, (2r - 1) \, y_t^{i,j}, & \text{else} \end{cases} \tag{5}$$
$$n = 0.05 \times e^{-2 \left( t / T \right)^2} \tag{6}$$
Here, $y_t^{i,j}$ and $y_{t+1}^{i,j}$ denote the position of the $j$-th dimension of the $i$-th black-winged kite individual at the $t$-th and $(t+1)$-th iteration steps, respectively; $p$ is a constant value set to 0.9; $r$ is a random number uniformly distributed between 0 and 1; $T$ represents the total number of iterations; and $t$ indicates the current iteration step.
During the migration phase, individual black-winged kites complete their migration by either continuing to follow the leader or leading a new population, as shown in (7) and (8):
$$Y_{t+1}^{i,j} = \begin{cases} Y_t^{i,j} + C(0,1) \times \bigl( Y_t^{i,j} - L_t^{j} \bigr), & F_i < F_{ri} \\ Y_t^{i,j} + C(0,1) \times \bigl( L_t^{j} - m \, Y_t^{i,j} \bigr), & \text{else} \end{cases} \tag{7}$$
$$m = 2 \times \sin \left( r + \pi / 2 \right) \tag{8}$$
where $L_t^{j}$ denotes the position in the $j$-th dimension of the best individual at the $t$-th iteration; $F_i$ represents the fitness value of the $i$-th individual; $F_{ri}$ refers to the fitness value of a randomly selected individual; and $C(0,1)$ indicates a Cauchy mutation operator.
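The initialization, attack, and migration updates of Eqs. (4)–(8) can be sketched as a single minimization loop. The greedy replacement and boundary clipping used here are simplifying assumptions of this sketch, not details taken from the original BKA paper:

```python
import numpy as np

def bka_minimize(f, lb, ub, N=30, T=200, p=0.9, seed=1):
    """Sketch of one BKA run over Eqs. (4)-(8)."""
    rng = np.random.default_rng(seed)
    d = len(lb)
    Y = lb + rng.random((N, d)) * (ub - lb)           # Eq. (4)
    F = np.array([f(y) for y in Y])
    for t in range(T):
        leader = Y[np.argmin(F)].copy()
        n = 0.05 * np.exp(-2.0 * (t / T) ** 2)        # Eq. (6)
        for i in range(N):                            # attack phase, Eq. (5)
            r = rng.random()                          # r ~ U(0, 1)
            if p < r:                                 # aerial circling
                cand = Y[i] + n * (1 + np.sin(r)) * Y[i]
            else:                                     # diving
                cand = Y[i] + n * (2 * r - 1) * Y[i]
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < F[i]:                             # greedy replacement
                Y[i], F[i] = cand, fc
        m = 2 * np.sin(rng.random() + np.pi / 2)      # Eq. (8)
        for i in range(N):                            # migration phase, Eq. (7)
            C = rng.standard_cauchy(d)                # Cauchy mutation C(0, 1)
            j = rng.integers(N)                       # random individual
            if F[i] < F[j]:
                cand = Y[i] + C * (Y[i] - leader)
            else:
                cand = Y[i] + C * (leader - m * Y[i])
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            if fc < F[i]:
                Y[i], F[i] = cand, fc
    return Y[np.argmin(F)], float(F.min())
```

For example, running this sketch on a 2-D sphere function over $[-5, 5]^2$ drives the best fitness close to zero within the default budget.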

3. Improved Black-Winged Kite Algorithm

In Section 2.2, the conventional BKA algorithm was introduced. As is typical of most intelligent optimization algorithms, BKA employs random initialization to assign positions in the initial population. This may result in an uneven distribution of the initial population and hinder the algorithm's global search capability. Furthermore, the individual position update step size during the attack phase of the BKA algorithm does not cover the entire global space, impeding the algorithm's global search and convergence. This section therefore enhances the BKA algorithm by initializing the population with a good point set and by incorporating a Lévy random-walk strategy to update individual positions during the attack phase.

3.1. Population Initialization Based on Good Point Sets

Population initialization directly impacts the convergence speed and accuracy of intelligent optimization algorithms. Randomly generated populations exhibit uneven distribution across the solution space, clustering densely in some regions while remaining sparse in others, which results in inefficient utilization of the search space and weak population diversity. To address this, the good point set method is employed to generate the initial population of black-winged kites [20]. The theory of good point sets originates from the renowned Chinese mathematician Hua Luogeng. Let $H_d$ denote the unit cube in $d$-dimensional Euclidean space, containing the following point set:
$$P_n(k) = \left\{ \left( \left\{ r_1^{(n)} k \right\}, \left\{ r_2^{(n)} k \right\}, \ldots, \left\{ r_d^{(n)} k \right\} \right), \; 1 \le k \le n \right\} \tag{9}$$
where $\{ \cdot \}$ denotes the fractional part.
Its deviation satisfies
$$\varphi(n) = C(r, \varepsilon) \, n^{-1+\varepsilon} \tag{10}$$
where C ( r , ε ) is a constant with ε > 0 , and r i is calculated as follows:
$$r_i = 2 \cos \left( \frac{2 \pi i}{p} \right), \quad 1 \le i \le d \tag{11}$$
where $p$ is the smallest prime number satisfying $(p-3)/2 \ge d$. Thus, $P_n(k)$ is referred to as a good point set. By mapping $P_n(k)$ to the feasible region of the population solutions, the population initialization formula becomes
$$X_i = BK_{lb} + (BK_{ub} - BK_{lb}) \cdot \left\{ P_n(k) \right\}, \quad i = 1, 2, \ldots, n \tag{12}$$
where $X_i$ denotes the initial position of the $i$-th individual, $BK_{ub}$ represents the upper bound, and $BK_{lb}$ indicates the lower bound.
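Equations (9)–(12) translate directly into a short routine. In the sketch below, the prime search starts from $p = 2d + 3$, the smallest candidate satisfying $(p - 3)/2 \ge d$:

```python
import numpy as np

def good_point_set(n, d):
    """Good point set of n points in [0, 1]^d, Eqs. (9)-(11):
    r_i = 2 cos(2*pi*i/p), with p the smallest prime s.t. (p-3)/2 >= d;
    the points are the fractional parts {r_i * k}."""
    def is_prime(m):
        return m > 1 and all(m % q for q in range(2, int(m ** 0.5) + 1))
    p = 2 * d + 3                       # smallest candidate for the prime
    while not is_prime(p):
        p += 1
    r = 2 * np.cos(2 * np.pi * np.arange(1, d + 1) / p)
    k = np.arange(1, n + 1).reshape(-1, 1)
    return np.mod(r * k, 1.0)           # fractional part of r_i * k

def init_population(n, lb, ub):
    """Map the good point set onto the feasible region, Eq. (12)."""
    P = good_point_set(n, len(lb))
    return lb + (ub - lb) * P
```

Unlike uniform random sampling, the resulting points cover the unit cube with low discrepancy, which is the property the initialization strategy relies on.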

3.2. Lévy Random-Walk Strategies

The position update strategy of the BKA during the attack phase is shown in (5) and (6). The coefficient $n$ decreases monotonically as the iterations proceed, and since its initial value is only 0.05, the update steps are small even in early iterations, hindering the algorithm's global search and convergence. Therefore, a Lévy random-walk strategy is proposed for the black-winged kite's attack phase to enhance its global exploration capability.
The Lévy random walk is a random-walk strategy based on the Lévy distribution, characterized by heavy tails that yield a higher probability of extreme values in the tail region [22]. Its probability density function follows a power law, implying that large-step events (i.e., long-distance movements) occur more frequently than under a normal distribution or other common distributions. The mathematical expression for step updates in a Lévy random walk is as follows:
$$Levy = \alpha \frac{u}{|v|^{1/\varphi}} \tag{13}$$
where $\alpha$ is an adjustable scaling coefficient, and $u$ and $v$ follow zero-mean normal distributions, $u \sim N(0, \sigma_u^2)$ and $v \sim N(0, \sigma_v^2)$, with $\sigma_u$ calculated as follows:
$$\sigma_u = \left[ \frac{\Gamma(1+\varphi) \, \sin(\pi \varphi / 2)}{\Gamma \bigl( (1+\varphi)/2 \bigr) \, \varphi \, 2^{(\varphi-1)/2}} \right]^{1/\varphi} \tag{14}$$
where $\sigma_v = 1$, $\Gamma(\cdot)$ denotes the Gamma function, and $\varphi = 1.5$; hence $\Gamma(1+\varphi) = \frac{3\sqrt{\pi}}{4} \approx 1.3293$.
After incorporating the Lévy random walk into the BKA attack phase, the position update formula for the black-winged kite becomes
$$y_{t+1}^{i,j} = \begin{cases} y_t^{i,j} + Levy \, (1 + \sin r) \, y_t^{i,j}, & p < r \\ y_t^{i,j} + Levy \, (2r - 1) \, y_t^{i,j}, & \text{else} \end{cases} \tag{15}$$
where $Levy$ denotes the update step size following a Lévy distribution; $r$ is a random number uniformly distributed between 0 and 1; and $p$ is a constant value set to 0.9.
To illustrate how the Lévy random-walk strategy enhances the algorithm's exploratory capacity, Figure 3 plots the evolution of the individual step-size update over the iterations for the original BKA and for the BKA after the Lévy random-walk strategy is integrated into the attack phase.
Figure 3 reveals that, in the attack phase, the original BKA updates its step size within an extremely narrow range from the very beginning of the iteration and continuously shrinks this range as the search proceeds. This aggressive attenuation markedly weakens the algorithm’s global exploration capability and substantially increases the risk of entrapment in local optima. In sharp contrast, IBKA incorporates a Lévy-flight-based random walk into the same phase. The heavy-tailed distribution of the Lévy steps enables each individual to perform extensive jumps while still refining local solutions, thereby maintaining an effective balance between exploration and exploitation. Consequently, the proposed mechanism significantly enlarges the covered search space and equips the algorithm with a pronounced ability to escape local extrema and approach the global optimum.
The algorithm flowchart for IBKA is shown in Figure 4.

3.3. IBKA Experimental Validation and Analysis

This section evaluates the optimization efficacy of the proposed IBKA on four established multi-peak test functions: M1 (Egg Holder Function), M2 (Holder Table Function), M3 (Cross-in-Tray Function), and M4 (Kowalik Function), as delineated in Table 1. For this assessment, comparative analyses were conducted against the standard BKA, GWO (Gray Wolf Optimizer), SSA (Sparrow Search Algorithm), and WOA (Whale Optimization Algorithm). The population size of each algorithm was set to 100, and the maximum number of iterations was set to 1000.
For a streamlined presentation of the experimental outcomes, three key metrics are employed as evaluative indices: Maximum Absolute Error (MAE), Least Absolute Error (LAE), and Root Mean Square Error (RMSE). Each of the functions—M1, M2, M3, and M4—undergoes 20 individual tests, after which the average values of these performance indices are computed. The results of these tests are compiled and presented in Table 2.
An analysis of the evaluation metrics presented in Table 2 for the optimization experiments on the multimodal functions M1, M2, M3, M4, and M5 yields the following conclusions. A comparison of the five algorithms reveals substantial disparities in their performance. The Root Mean Square Errors for IBKA are $8.1584 \times 10^{-9}$, $1.0548 \times 10^{-10}$, $2.3559 \times 10^{-9}$, $4.1853 \times 10^{-4}$, and $1.4845 \times 10^{-2}$, respectively. The algorithm converged to optimal levels across all five test functions, achieving the best RMSE results. On the more complex functions (for instance, function M5), the BKA, GWO, SSA, and WOA algorithms frequently converged to local optima, becoming trapped in suboptimal solutions. In contrast, IBKA consistently found superior solutions in these scenarios; this is evident from the RMSE averaged over 20 independent tests, which shows that IBKA reliably approaches the global optimum. Its efficacy is further evidenced by smaller and more uniform error bounds compared to the alternative algorithms. IBKA also outperforms the majority of the compared algorithms in both MAE and LAE. This is particularly evident in M2 and M5, where BKA, GWO, SSA, and WOA appear trapped in local optima, while IBKA nevertheless exhibits remarkable optimization capability. This behavior is attributed to the richness of the initial population and the robust global search enabled by the Lévy random-walk strategy in the attack phase.
In summary, the IBKA algorithm presented in this paper exhibits enhanced global and local search abilities when compared to the standard BKA and the other compared algorithms.

3.4. IBKA Convergence Analysis

In order to demonstrate the global convergence of the proposed IBKA, Theorems 1 and 2 are introduced. Before delving into their proofs, two definitions and two lemmas are presented, laying the groundwork for the subsequent theorem proofs.
Definition 1.
In IBKA, assume that the population size of black-winged kites is $N$, the solution dimension is $d$, the fitness function is $f(x)$, and the position of the $i$-th black-winged kite individual at iteration $t$ is denoted as $x_i(t)$. The optimal solution is $f_{best}$, and the optimal state set is $S = \{ x_1, x_2, \ldots, x_N \}$, where each $x_i \in S$ satisfies $f(x_i) = f_{best}$, $1 \le i \le N$.
Definition 2.
For a random sequence $\{\zeta_i\}$ (where $i = 1, 2, \ldots, N$, and $N$ may be infinite), suppose there exist a real number $\zeta \in \mathbb{R}$ and a sequence of random events $\{B_k\}$, $B_k = \{ |\zeta_k - \zeta| \ge \varepsilon \}$, such that either of the following conditions is satisfied:
  • Direct convergence condition: $P(\lim_{i \to \infty} \zeta_i = \zeta) = 1$.
  • Limit-supremum condition: for every $\varepsilon > 0$, $P \left( \bigcap_{j=1}^{\infty} \bigcup_{k=j}^{\infty} B_k \right) = 0$.
It then follows that the stochastic sequence $\{\zeta_i\}$ converges almost surely to $\zeta$, i.e., $P(\lim_{i \to \infty} \zeta_i = \zeta) = 1$.
Lemma 1
([23]). (Borel–Cantelli) Suppose that $\{E_i\}$ ($i = 1, 2, \ldots$) is a sequence of events in a probability space. If the sum of the probabilities of all events is finite, i.e., $\sum_{i=1}^{\infty} P(E_i) < \infty$, then the probability of the limit-supremum event is zero, i.e., $P \left( \bigcap_{j=1}^{\infty} \bigcup_{i=j}^{\infty} E_i \right) = 0$. Conversely, if the events are mutually independent and the sum of their probabilities diverges, then the probability of the limit-supremum event is one.
To support the proofs of Theorems 1 and 2 below, Lemma 1 is proved briefly as follows:
Proof of Lemma 1. 
Let $(E_n)$ be a sequence of events in some probability space, and assume that the sum of their probabilities is finite:
$$\sum_{n=1}^{\infty} P(E_n) < \infty$$
Equivalently, this infinite series of non-negative terms converges; hence, by standard results on convergent series, the tail sums vanish:
$$\inf_{N \ge 1} \sum_{n=N}^{\infty} P(E_n) = 0 .$$
Consequently,
$$P \left( \limsup_{n \to \infty} E_n \right) = P \left( \bigcap_{N=1}^{\infty} \bigcup_{n=N}^{\infty} E_n \right) \le \inf_{N \ge 1} P \left( \bigcup_{n=N}^{\infty} E_n \right) \le \inf_{N \ge 1} \sum_{n=N}^{\infty} P(E_n) = 0 \qquad \square$$
Lemma 2
([24]). Within the state space of IBKA, there exists $\mu \in (0, 1)$ such that $P_{ij} < \mu$, where $P_{ij}$ denotes the probability of the black-winged kite population transitioning from state $i$ to state $j$.
The following presents Theorem 1 and Theorem 2 along with their respective proofs, theoretically demonstrating that IBKA converges with probability 1.
Theorem 1.
The solution sequence { x i ( t ) ,   t > 0 } of IBKA is a finite-order Markov chain.
Proof of Theorem 1. 
The position of each black-winged kite individual in IBKA is iteratively updated within a finite state space. The improvements introduced in this paper, such as the use of a good point set for population initialization and the adoption of a Lévy random-walk strategy to dynamically update step sizes during the attack phase, are mutually independent. Moreover, the position of each black-winged kite individual is updated progressively as the number of iterations increases. In other words, the state of the black-winged kite population at generation $t$ is influenced solely by the state at generation $t-1$. Therefore, the solution sequence $\{ x_i(t), \; t > 0 \}$ of IBKA constitutes a finite-order Markov chain. □
Theorem 2.
IBKA converges to the global optimum with probability 1.
Proof of Theorem 2. 
In Definition 1, the population size of the black-winged kite is denoted as $N$, the solution dimension as $d$, and the global best solution as $f_{best}$. As established in Theorem 1, the solution sequence of IBKA forms a finite-order Markov chain. Then, for any $\varepsilon > 0$, denoting the current best solution at the $t$-th iteration as $f_t$, two possible cases arise: $|f_t - f_{best}| < \varepsilon$ or $|f_t - f_{best}| \ge \varepsilon$. Assume that at each iteration the probability that the state of the black-winged kite population satisfies $|f_t - f_{best}| \ge \varepsilon$ is $P_t$. According to Lemma 2, $\sum_{t=1}^{\infty} P_t \le \sum_{t=1}^{\infty} \mu^{t} < \infty$. From Lemma 1, it can be deduced that $P \left( \bigcap_{j=1}^{\infty} \bigcup_{t=j}^{\infty} \{ |f_t - f_{best}| \ge \varepsilon \} \right) = 0$, which satisfies the limit-supremum condition in Definition 2. Therefore, IBKA must converge to $f_{best}$. □

4. The Establishment of the IBKA-SCN Model

The hyperparameter settings of the Stochastic Configuration Network (SCN) directly influence both the performance and modeling efficiency of the network. The two most critical hyperparameters are $r$ and $\lambda$. A relatively small value of $r$ implies a looser inequality constraint; this value gradually increases during the construction of the network until it approaches 1. $\lambda$ serves as a scaling factor: the parameters of the candidate nodes in the hidden-node pool are randomly generated within the range $[-\lambda, \lambda]$, and those nodes that satisfy the inequality constraint, as shown in Formula (2), are selected as new nodes and added to the hidden layer of the network.
In the conventional SCN, the hyperparameters r and λ are predefined as non-negative increasing sequences, such as r = { 0.9 ,   0.99 ,   0.999 ,   0.9999 ,   0.99999 } and λ = { 0.5 ,   1 ,   5 ,   10 ,   30 ,   50 ,   100 ,   150 ,   200 ,   250 } . This approach not only increases the complexity of the model but also reduces its construction efficiency and makes it more prone to overfitting. The selection of r and λ should be data-dependent rather than relying on a fixed, manually specified sequence. Therefore, to improve the accuracy of SCN in predicting rare-earth element component content, we propose the use of the Improved Black-winged Kite Algorithm (IBKA) to optimize the hyperparameter combinations of SCN. This method is named IBKA-SCN. The detailed algorithm workflow is as follows:
Step 1: Set the initial black-winged kite population size to N, the solution dimension to D, and the maximum number of iterations to T. Define the upper bound U b and lower bound L b for the hyperparameters r and λ . Specify the maximum number of hidden nodes L m a x , the maximum number of candidate nodes T m a x , and the tolerance error ε .
Step 2: Employ the Root Mean Square Error ( R M S E ) as the fitness function of the Improved Black-winged Kite Algorithm (IBKA) to explore the optimal combination of hyperparameters for the SCN.
Step 3: Determine whether the IBKA-SCN model has reached the preset tolerance error or the maximum number of iterations; if neither criterion is met, return to Step 2.
Step 4: Assign the optimal hyperparameters, corresponding to the best solution (position) found by the Improved Black-winged Kite Algorithm (IBKA), to the SCN. Then, establish a predictive model for rare-earth element component content using the SCN with these optimized parameters.
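Steps 1–4 can be sketched as a thin wrapper around any SCN trainer. In the sketch below, `train_scn` is a hypothetical callback assumed to build an SCN with the given $(r, \lambda)$ pair and return its validation RMSE (Step 2), and a simplified Cauchy move stands in for the full IBKA attack/migration updates:

```python
import numpy as np

def ibka_scn_search(train_scn, X, y, T=60, N=20, seed=0):
    """Hypothetical IBKA-SCN wrapper: search the (r, lambda) plane
    with validation RMSE as the fitness."""
    rng = np.random.default_rng(seed)
    lb = np.array([0.9, 0.5])            # lower bounds for r and lambda
    ub = np.array([0.9999, 250.0])       # upper bounds for r and lambda
    # Step 1: initial population (random here; the full model would use
    # the good point set initialization of Section 3.1)
    pop = lb + rng.random((N, 2)) * (ub - lb)
    fit = np.array([train_scn(X, y, r=c[0], lam=c[1]) for c in pop])
    for t in range(T):                   # Step 3: iterate until budget is spent
        best = pop[np.argmin(fit)]
        for i in range(N):
            # simplified Cauchy move toward the current best, standing in
            # for the full IBKA attack/migration updates
            cand = pop[i] + 0.1 * rng.standard_cauchy(2) * (best - pop[i])
            cand = np.clip(cand, lb, ub)
            f = train_scn(X, y, r=cand[0], lam=cand[1])
            if f < fit[i]:               # greedy replacement
                pop[i], fit[i] = cand, f
    # Step 4: return the best hyperparameter pair for the final SCN
    return pop[np.argmin(fit)], float(fit.min())
```

The hyperparameter bounds match the experimental settings reported in Section 5.1; everything else in the loop is an illustrative simplification.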
The pseudocode of IBKA-SCN is presented in Algorithm 1.
Algorithm 1: IBKA-SCN
Applsci 15 10880 i001

5. Experiments and Result Analysis

The experiments were conducted on the MATLAB 2022b (MathWorks, Natick, MA, USA) (version 9.13.0) platform using the following hardware specifications: an Intel® Core™ i5-14650HX (Intel Corporation, Santa Clara, CA, USA) 2.20 GHz processor, 16 GB of RAM (Samsung Electronics, Seoul, South Korea), a 64-bit Windows (Microsoft Corporation, Redmond, WA, USA) (version 22H2, build 22621) operating system, and an NVIDIA GeForce RTX 2060 (NVIDIA Corporation, Santa Clara, CA, USA) graphics card.

5.1. Performance of SCN Based on the IBKA Algorithm

This section evaluates the generalization capability of IBKA-SCN using four regression datasets from the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, as detailed in Table 3; these datasets have been widely adopted in reference [25], enabling a direct comparison under identical experimental settings. A comparative analysis is conducted for IBKA-SCN, IBKA1-SCN (IBKA-SCN with only good point set initialization), IBKA2-SCN (IBKA-SCN with only the Lévy random-walk strategy), BKA-SCN (SCN with the Black-winged Kite Algorithm), IRVFL [25], SCN, BSCN [17] (SCN with block increments), WOA-SCN (SCN with the Whale Optimization Algorithm), and GWO-SCN (SCN with the Gray Wolf Optimizer) to validate the effectiveness of the proposed method. All comparison models underwent the same data pre-processing procedures and dataset splits, and their hyperparameters were systematically optimized to achieve peak performance. In IRVFL, the weights and biases are randomly assigned from a uniform distribution within the range $[-1, 1]$. For SCN and its variant, the hyperparameters $r$ and $\lambda$ are set to $\{0.9, 0.99, 0.999, 0.9999, 0.99999\}$ and $\{0.5, 1, 5, 10, 30, 50, 100, 150, 200, 250\}$, respectively. In SSA-SCN, WOA-SCN, GWO-SCN, IBKA-SCN, IBKA1-SCN, and IBKA2-SCN, the population size was set to 100 and the maximum number of iterations to 1000; the lower and upper bounds for $r$ are set to 0.9 and 0.9999, respectively, those for $\lambda$ to 0.5 and 250, and the tolerance error is set to 0.005. To ensure a fair evaluation, 80% of each dataset is randomly selected as the training set, with the remaining 20% used for testing. All input features are normalized to mitigate the impact of varying data scales across datasets.
The test results of the compared models on the four datasets are presented in Table 4, Table 5, Table 6 and Table 7. Each model is evaluated over 50 repeated random train/test splits, and performance is reported as the mean and standard deviation of the Root Mean Square Error ($RMSE$, as shown in (19)), together with the average model construction time (T).
$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2} \quad (19)$$
Here, $y_i$ denotes the predicted output of the model, $\hat{y}_i$ represents the actual value of the sample, and $N$ is the number of samples.
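The evaluation protocol (mean and standard deviation of RMSE over repeated random splits) can be sketched in a few lines; `fit_predict` is a hypothetical stand-in for training any of the compared models on the training split and predicting on the test inputs.

```python
import math
import random

def rmse(y_true, y_pred):
    """Root Mean Square Error, Equation (19)."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def repeated_holdout(data, fit_predict, trials=50, test_frac=0.2, seed=0):
    """Mean and standard deviation of test RMSE over repeated random splits.

    data: list of (x, y) pairs; fit_predict(train, xs) -> predictions for xs.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_frac))
        train, test = shuffled[:cut], shuffled[cut:]
        y_true = [y for _, y in test]
        y_pred = fit_predict(train, [x for x, _ in test])
        scores.append(rmse(y_true, y_pred))
    mean = sum(scores) / len(scores)
    std = math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))
    return mean, std
```

Reporting the standard deviation alongside the mean, as the tables do, distinguishes models that are accurate on average from models that are also stable across splits.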
A comprehensive analysis of the results in Table 4, Table 5, Table 6 and Table 7 reveals that IRVFL exhibits significantly lower training and testing accuracy than the other algorithms on all four datasets, making it inadequate for practical applications. The experimental findings center on model training efficiency and prediction accuracy. In terms of training efficiency, the BSCN model, leveraging its block node-addition mechanism, significantly shortened the training cycle compared to the standard SCN; employing meta-heuristic algorithms to optimize the key hyperparameters ($\lambda$ and $r$) of SCN likewise reduced training time. Hyperparameter optimization, however, proved more valuable for prediction accuracy: the SCN models with optimized $\lambda$ and $r$ achieved higher accuracy than both the original SCN and BSCN. Among all models, IBKA-SCN, which integrates all optimization strategies, attained the highest prediction accuracy on every tested dataset. The ablation studies explain this superiority at a mechanistic level. Compared to the baseline BKA-SCN, the good point set initialization used alone in IBKA1-SCN established a better foundation for the search by enhancing population diversity, while the Lévy flight random-walk strategy used alone in IBKA2-SCN strengthened the global search capability and the ability to escape local optima. IBKA-SCN, which combines both strategies, achieved the optimal performance, confirming the effectiveness and synergy of each improvement.
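The Lévy flight random-walk step discussed above is commonly generated with Mantegna's algorithm; a minimal sketch follows, assuming the typical stability parameter β = 1.5 (the paper's exact parameterization may differ).

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """Draw one Lévy-distributed step via Mantegna's algorithm (1 < beta <= 2)."""
    rng = rng or random.Random()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)   # scale of the numerator Gaussian
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)     # heavy-tailed: mostly small, occasionally large
```

In an algorithm like IBKA, such a step would perturb a candidate's position: the frequent small moves refine the current region, while the rare large jumps help the search escape local optima.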
In conclusion, the proposed IBKA-SCN method not only enhances the regression accuracy of SCN but also effectively reduces training time, demonstrating a combination of high precision and computational efficiency.

5.2. Comparison Experiment of Rare-Earth Element Component Content Prediction Based on IBKA-SCN

Rare-earth extraction and separation is a process that separates and purifies mixed rare-earth solutions to obtain products of target purity. We consider the extraction separation of two components A and B, where A is the easily extractable component and B is the difficult-to-extract component. The production separation flow is shown in Figure 5. From left to right, the diagram includes an extraction section comprising $n$ stages of mixer-settler tanks, followed by a washing section comprising $m$ stages of mixer-settler tanks. Here, $u_1$ denotes the feed flow rate of the rare-earth solution, $u_2$ the extractant flow rate, and $u_3$ the washing agent flow rate, while $u_4$ and $u_5$ denote the component flow rates of the rare-earth feed solution.
In practical production scenarios, numerous variables can influence component content; however, only a limited number of auxiliary variables can be accurately measured in real industrial processes, primarily the feed concentration, scrubber flow rate, and extractant flow rate, among a few other parameters [6]. Therefore, a nonlinear mapping can be established between the rare-earth element component content $\rho$ and the extractant flow rate $V_O$, scrubber flow rate $V_W$, feed flow rate $V_F$, and feed concentration $X_F$, which is expressed by Equation (20) as follows:
$$\rho = f\left( V_O, V_W, V_F, X_F \right) \quad (20)$$
The dataset used in this study was collected from the Pr/Nd extraction section of a rare-earth separation enterprise. It consists of 200 samples gathered from multiple monitoring points on the same production line during the same time period. According to the process control requirements of extraction production, a specific stage in the extraction section was selected as the monitoring point to measure the aqueous-phase content of the Pr/Nd components. To validate the effectiveness of the proposed IBKA-SCN model, its performance was compared with several modeling methods commonly used for component content prediction, including SVM, GA-BP, SCN, and SGDE-SCN. Among the 200 collected samples, 80% were randomly selected as the training set $\{X_i, Y_i\} = \{V_O, V_W, V_F, X_F, Nd_i\} \in \mathbb{R}^{160 \times 5}$, and the remaining 20% were used as the test set $\{\bar{X}_i, \bar{Y}_i\} = \{V_O, V_W, V_F, X_F, Nd_i\} \in \mathbb{R}^{40 \times 5}$. All input and output variables were normalized to the range $[0, 1]$. The performance of each model was evaluated using the following metrics: $RMSE$ (as shown in Equation (19)), Mean Absolute Error ($MAE$) and Sum of Squared Errors ($SSE$), as shown in Equations (21) and (22), and the training time (T). These indicators collectively assess the accuracy and efficiency of the predictive models.
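The 80/20 split and the [0, 1] normalization described above can be sketched generically (this is an illustrative sketch, not the authors' code):

```python
import random

def min_max_normalize(column):
    """Scale a list of values linearly into [0, 1]."""
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def split_80_20(samples, seed=42):
    """Randomly partition samples into an 80% training set and a 20% test set."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    cut = int(0.8 * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]
```

For 200 samples this yields the 160/40 partition used in the paper; in practice the normalization bounds would be computed on the training set and reused for the test set.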
$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right| \quad (21)$$
$$SSE = \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2 \quad (22)$$
Here, $y_i$ denotes the predicted output of the model, $\hat{y}_i$ represents the actual value of the sample, and $N$ is the number of samples ($i = 1, 2, \ldots, N$).
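Equations (21) and (22), the MAE and SSE metrics, translate directly into code:

```python
def mae(y_true, y_pred):
    """Mean Absolute Error, Equation (21): average of absolute residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def sse(y_true, y_pred):
    """Sum of Squared Errors, Equation (22): unnormalized squared residual total."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
```

Note that SSE, unlike RMSE and MAE, is not normalized by the sample count, so it grows with the size of the test set.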
For the SVM model, the penalty coefficient was set to $C = 5$ and the kernel parameter to $\gamma = 0.7$. For GA-BP (a BP neural network optimized by a genetic algorithm), the maximum number of hidden nodes was set to 8, the learning rate to 0.01, and the maximum number of iterations to 1000. For SCN and its variants, the candidate sets for $\lambda$ and $r$ were defined as $\{0.5, 1, 5, 10, 30, 50, 100, 150, 200, 250\}$ and $\{0.9, 0.99, 0.999, 0.9999, 0.99999\}$, respectively. For IBKA-SCN, the lower and upper bounds for $\lambda$ were set to 0.5 and 250, and for $r$, to 0.9 and 0.9999. The performance of each predictive model on the component content dataset is summarized in Table 8, and the model prediction outputs alongside the actual values are displayed in Figure 6.
According to the experimental results presented in Figure 6 and Table 8, IBKA-SCN achieves the lowest values in RMSE, MAE, and SSE, indicating that its prediction accuracy surpasses that of SVM, GA-BP, SCN, and SGDE-SCN. Meanwhile, the training time of IBKA-SCN remains around 2.4 s, which is significantly lower than that of SGDE-SCN and SVM and is comparable to SCN while delivering a substantial improvement in accuracy. In summary, while retaining the advantages of SCN—such as structural simplicity and fast convergence—IBKA-SCN further optimizes the hyperparameter combination through IBKA to better adapt to the specific dataset. This leads to simultaneous enhancement in both prediction precision and training efficiency, demonstrating the feasibility and effectiveness of the proposed method in practical applications.

6. Conclusions

The accurate and efficient prediction of component content in the rare-earth extraction and separation process plays a decisive role in the design of control systems, product quality management, and energy optimization. In response to this key issue, this paper proposes the IBKA-SCN soft sensor model for predicting component content. The main contributions are summarized as follows:
  • To overcome the limitations of the Black-winged Kite Algorithm (BKA) in population diversity and global optimization capability, an Improved Black-winged Kite Algorithm (IBKA) is developed by introducing good point set initialization and a Lévy flight strategy. IBKA achieves the best convergence accuracy among the compared optimizers on five multimodal benchmark functions, and a convergence analysis is provided to substantiate its theoretical validity.
  • To address the reliance on manually configured hyperparameters in the Stochastic Configuration Network (SCN), which often limits model accuracy, IBKA is employed to adaptively optimize the hyperparameter combinations for SCN. The proposed IBKA-SCN method constructs the network via stochastic configuration after identifying the optimal hyperparameters. Extensive experiments on four real-world regression datasets demonstrate that IBKA-SCN outperforms other benchmark models, confirming its superior generalization capability and effectiveness.
  • This study constitutes the first attempt to integrate the improved BKA with the SCN framework for hyperparameter optimization in the practical soft-sensing application of rare-earth component content. Test results show that IBKA-SCN achieves the smallest RMSE, MAE, and SSE values, indicating the highest predictive accuracy, and that it requires significantly less training time than SGDE-SCN and SVM, validating its feasibility and efficiency in real industrial scenarios.
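The optimize-then-configure workflow summarized above can be sketched abstractly. In this sketch, a simple bounded random search stands in for IBKA, and a toy quadratic objective stands in for the SCN validation error; in the real method, evaluating the objective would mean building an SCN for each candidate $(\lambda, r)$ and measuring its validation RMSE.

```python
import random

def search_hyperparams(objective, bounds, iterations=1000, seed=0):
    """Minimize objective over box bounds; random search as a stand-in for IBKA."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(iterations):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Toy stand-in for "validation RMSE of an SCN built with (lambda, r)".
toy_objective = lambda x: (x[0] - 50.0) ** 2 + (x[1] - 0.99) ** 2
bounds = [(0.5, 250.0), (0.9, 0.9999)]  # lambda and r bounds from the paper
best, best_f = search_hyperparams(toy_objective, bounds)
```

Once the optimizer returns the best $(\lambda, r)$, the final network is built by the usual stochastic configuration procedure with those values fixed.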
In conclusion, the IBKA-SCN model proposed in this study not only provides a novel methodological framework for hyperparameter optimization in randomized neural networks but also offers significant engineering value in the accurate prediction of component content in rare-earth extraction processes. This work establishes a solid foundation for future applications such as online soft sensing, real-time control, and the development of digital twins for full-process optimization.
The proposed method also has limitations that suggest directions for future research: it cannot adaptively adjust the parameters of existing hidden-layer nodes or add and remove nodes online, which would allow the model to update itself as new process data arrive.

Author Contributions

Conceptualization, R.L. and X.H.; methodology, X.H.; software, X.H.; validation, Z.H., L.L. and T.Q.; investigation, C.L.; resources, H.Z. and Z.H.; data curation, R.L.; writing—original draft preparation, X.H.; writing—review and editing, X.H. and R.L.; visualization, X.H.; supervision, R.L. and L.L.; project administration, Z.H. and L.L.; funding acquisition, Z.H., L.L., C.L. and H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the National Natural Science Foundation of China (No. 22268048), the National Key Research and Development Program of China (No. 2022YFB3504300), the Double Thousand Plan of Jiangxi Province (jxsq2020105012), the Natural Science Youth Foundation of Jiangxi Province (No. 20212BAB213031), the General Project of the Key Research and Development Program of Ganzhou (Nos. 202101124952 and 202101124954), and a grant from the Research Projects of Jiangxi Institute of Rare Earths, Chinese Academy of Sciences (E055ZA01).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The public datasets used in this paper are available at https://sci2s.ugr.es/keel/datasets.php (accessed on 15 May 2025). The rare-earth dataset is withheld due to industrial confidentiality.

Conflicts of Interest

Author Liangfang Liao was employed by the company China Rare Earth Jiangxi Rare Earth Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SCN: Stochastic Configuration Network
BKA: Black-winged Kite Algorithm
IBKA: Improved Black-winged Kite Algorithm
RMSE: Root Mean Square Error
MAE: Mean Absolute Error
SSE: Sum of Squared Errors
SVM: Support Vector Machine
GA-BP: BP Neural Network Optimized by Genetic Algorithm

References

  1. Xu, F.P.; Yang, H.; Chen, J.; Jian, Y.Z.; Rong, X.L. Soft sensing of rare-earth element component contents based on transfer learning and residual attention convolutional network. CIESC J. 2025, 76, 1647–1660. [Google Scholar]
  2. Zhang, N.; Huang, C.; Hu, B. ICP-AES determination of trace rare earth elements in environmental and food samples by on-line separation and preconcentration with acetylacetone-modified silica gel using microcolumn. Anal. Sci. 2007, 23, 997–1002. [Google Scholar] [CrossRef]
  3. Tan, D.; Zhu, J.M.; Wang, X.; Han, G.; Xu, W. High-sensitivity determination of Cd isotopes in low-Cd geological samples by double spike MC-ICP-MS. J. Anal. At. Spectrom. 2020, 35, 713–727. [Google Scholar] [CrossRef]
  4. Zawisza, B.; Pytlakowska, K.; Feist, B.; Polowniak, M.; Kita, A.; Sitko, R. Determination of rare earth elements by spectroscopic techniques: A review. J. Anal. At. Spectrom. 2011, 26, 2373–2390. [Google Scholar] [CrossRef]
  5. Yuan, J.; Shen, J.L.; Liu, J.K.; Rong, H.Z. Determination of rare-earth elements in geological samples by high-energy polarized energy-dispersive X-ray fluorescence spectrometry. Spectrosc. Spectr. Anal. 2018, 38, 5. [Google Scholar]
  6. Tian, H.; Guo, Z.H.; Li, L.Y. Research on soft-sensing method for rare-earth extraction separation process. J. Chin. Rare Earth Soc. 2015, 33, 201–205. [Google Scholar]
  7. Xiang, Z.R.; Liu, S.Q. Component content soft-sensor in rare-earth extraction based on LS-SVM. J. Chin. Rare Earth Soc. 2009, 27, 132–136. [Google Scholar]
  8. Zhu, J.Y.; Zhang, X.Q.; Yang, H.; Rong, X.L. Soft-sensing of Pr/Nd component content under different single illumination conditions. CIESC J. 2019, 70, 780–788. [Google Scholar]
  9. Lu, R.X.; Deng, B.; Yang, H.; Jian, Y.Z.; Gang, Y.; Wen, J.D. Pr/Nd component content prediction based on improved GRA–just-in-time learning algorithm. Control Decis. 2024, 39, 458–466. [Google Scholar]
  10. Zhang, S.P.; Zhang, Q.H.; Wang, B.; Xiao, L.Z.; Qiao, F.L.; Hao, R.G. Rare earth element component content prediction based on deep machine vision. Nonferrous Met. Sci. Eng. 2023, 14, 587–596. [Google Scholar]
  11. Wang, D.; Li, M. Stochastic configuration networks: Fundamentals and algorithms. IEEE Trans. Cybern. 2017, 47, 3466–3479. [Google Scholar] [CrossRef] [PubMed]
  12. Li, K.; Wang, W.; Lin, S.H. Soft measurement of ammonia nitrogen concentration based on GA-SCN. In Proceedings of the 2018 IEEE Symposium on Product Compliance Engineering-Asia (ISPCE-CN), Shenzhen, China, 5–7 December 2018; pp. 1–4. [Google Scholar]
  13. Wang, Y.; Li, W.; Chen, H.; Ma, Y.; Yu, B.; Yu, Y. Short-term photovoltaic power forecasting based on an improved zebra optimization algorithm–stochastic configuration network. Sensors 2025, 25, 2968. [Google Scholar] [CrossRef]
  14. Cao, W.; Xie, Z.; Li, J.; Xu, Z.; Ming, Z.; Wang, X. Bidirectional stochastic configuration network for regression problems. Neural Netw. 2021, 140, 237–246. [Google Scholar] [CrossRef]
  15. Zhao, L.J.; Zou, S.D.; Guo, S.; Huang, M.Z. Operating condition identification of ball mill based on regularized stochastic configuration network. Control Eng. China 2020, 27, 1–7. [Google Scholar]
  16. Li, M.; Wang, D. 2-D stochastic configuration networks for image data analytics. IEEE Trans. Cybern. 2021, 52, 354–366. [Google Scholar] [CrossRef]
  17. Dai, W.; Li, D.; Zhou, P.; Chai, T. Stochastic configuration networks with block increments for data modeling in process industries. Inf. Sci. 2019, 484, 367–386. [Google Scholar] [CrossRef]
  18. Lu, R.X.; Hu, X.R.; Pei, C.; Yang, H.; Dai, W.H.; Zhu, J.Y. Optimization strategy for batch-stochastic configuration network models and their application in component content prediction. Eng. Appl. Artif. Intell. 2025, 150, 107261. [Google Scholar] [CrossRef]
  19. Felicetti, M.J.; Wang, D. Stochastic configuration networks with particle swarm optimisation search. Inf. Sci. 2024, 677, 120868. [Google Scholar] [CrossRef]
  20. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  21. Lancaster, P.; Tismenetsky, M. The Theory of Matrices: With Applications; Academic Press: New York, NY, USA, 1985. [Google Scholar]
  22. Li, J.; An, Q.; Lei, H.; Deng, Q.; Wang, G.G. Survey of Lévy flight-based metaheuristics for optimization. Mathematics 2022, 10, 2785. [Google Scholar] [CrossRef]
  23. Prokhorov, A.V. Borel–Cantelli lemma. In Encyclopaedia of Mathematics; Hazewinkel, M., Ed.; Springer: Dordrecht, The Netherlands, 2001. [Google Scholar]
  24. Xia, H.; Ke, Y.; Liao, R.; Zhang, H. Fractional order dung beetle optimizer with reduction factor for global optimization and industrial engineering optimization problems. Artif. Intell. Rev. 2025, 58, 308. [Google Scholar] [CrossRef]
  25. Dai, W.; Ao, Y.; Zhou, L.; Zhou, P.; Wang, X. Incremental learning paradigm with privileged information for random vector functional-link networks: IRVFL. Neural Comput. Appl. 2022, 34, 6847–6859. [Google Scholar] [CrossRef]
Figure 1. Scheme of the article research.
Figure 2. Basic structure of SCN.
Figure 3. Comparison of step sizes for two position update methods.
Figure 4. IBKA algorithm flowchart.
Figure 5. Process flow of cascaded solvent extraction.
Figure 6. Model output alongside the actual values.
Table 1. Five multimodal standard test functions.
Function | Expression | Range | Target
M1 | $f(\mathbf{x}) = -(x_2 + 47)\sin\left(\sqrt{\left|x_2 + x_1/2 + 47\right|}\right) - x_1\sin\left(\sqrt{\left|x_1 - (x_2 + 47)\right|}\right)$ | [−512, 512] | −959.6407
M2 | $f(\mathbf{x}) = -\left|\sin(x_1)\cos(x_2)\exp\left(\left|1 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right|$ | [−10, 10] | −19.2085
M3 | $f(\mathbf{x}) = -0.0001\left(\left|\sin(x_1)\sin(x_2)\exp\left(\left|100 - \frac{\sqrt{x_1^2 + x_2^2}}{\pi}\right|\right)\right| + 1\right)^{0.1}$ | [−10, 10] | −2.0626
M4 | $f(\mathbf{x}) = \sum_{i=1}^{11}\left[a_i - \frac{x_1\left(b_i^2 + b_i x_2\right)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | [−5, 5] | 0.0003075
M5 | $f(\mathbf{x}) = -\sum_{i=1}^{30} x_i \sin\left(\sqrt{\left|x_i\right|}\right)$ | [−500, 500] | −12,569.5
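M1 in Table 1 is the well-known Eggholder benchmark; a short sketch evaluating it near its known global optimum (approximately (512, 404.2319)) recovers the target value:

```python
import math

def eggholder(x1, x2):
    """Eggholder benchmark; global minimum about -959.6407 near (512, 404.2319)."""
    term1 = -(x2 + 47.0) * math.sin(math.sqrt(abs(x2 + x1 / 2.0 + 47.0)))
    term2 = -x1 * math.sin(math.sqrt(abs(x1 - (x2 + 47.0))))
    return term1 + term2
```

Its many local minima near the bound of the search box make it a standard stress test for the global search ability that the Table 2 comparison measures.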
Table 2. Comparison of optimization results for multimodal functions.
Function | Metric | IBKA | BKA | GWO | SSA | WOA
M1RMSE 8.1584 × 10 9 9.0435 × 10 9 7.2487 × 10 8 5.1600 × 10 2 3.7279 × 10 5
MAE 1.0665 × 10 8 1.2603 × 10 6 1.3608 × 10 6 6.3480 × 10 1 2.7225 × 10 3
LAE 1.8956 × 10 9 3.1679 × 10 9 3.4620 × 10 9 8.2416 × 10 4 3.7279 × 10 5
M2RMSE 1.0548 × 10 10 1.5380 × 10 10 8.3952 × 10 7 4.6907 × 10 3 2.3760 × 10 5
MAE 1.0022 × 10 9 1.1134 × 10 9 2.4675 × 10 6 9.5031 × 10 3 1.1878 × 10 4
LAE 2.4883 × 10 12 2.9042 × 10 12 3.3003 × 10 7 2.5589 × 10 4 2.5679 × 10 6
M3RMSE 2.3559 × 10 9 3.0132 × 10 9 4.1428 × 10 8 3.7758 × 10 6 3.0638 × 10 4
MAE 1.4311 × 10 8 1.6315 × 10 8 6.3726 × 10 7 1.0729 × 10 5 1.3755 × 10 3
LAE 1.2648 × 10 9 1.8841 × 10 9 2.2204 × 10 9 1.2456 × 10 7 5.1934 × 10 6
M4RMSE 4.1853 × 10 4 6.3481 × 10 4 2.7000 × 10 3 9.9131 × 10 4 1.1000 × 10 3
MAE 1.0565 × 10 3 2.1531 × 10 3 2.0700 × 10 2 1.3000 × 10 3 2.5000 × 10 3
LAE 4.1863 × 10 4 6.1541 × 10 4 6.1551 × 10 4 6.8972 × 10 4 6.1499 × 10 4
M5RMSE 1.4845 × 10 2 3.1511 × 10 1 6.1013 × 10 3 9.3346 × 10 3 6.9420 × 10 3
MAE 1.1436 × 10 2 1.9411 × 10 1 1.1130 × 10 4 9.8427 × 10 3 7.0319 × 10 3
LAE 1.1574 × 10 2 1.4951 × 10 2 5.3794 × 10 3 8.6424 × 10 3 6.6720 × 10 3
Table 3. Regression datasets.
Dataset | Attributes | Output | Instances
Stock | 9 | 1 | 950
Concrete | 8 | 1 | 1030
Computer activity | 21 | 1 | 8192
Pole | 26 | 1 | 13,750
Table 4. Performance comparison of the compared models on the stock dataset.
Algorithms | Training (L = 25) | Training (L = 50) | Testing (L = 25) | Testing (L = 50) | T (s) (L = 25) | T (s) (L = 50)
IRVFL 0.2374 ± 0.0364 0.1966 ± 0.0298 0.2189 ± 0.0569 0.1817 ± 0.0686 0.0071 0.0147
SCN 0.0442 ± 0.0009 0.0407 ± 0.0004 0.0425 ± 0.0014 0.0417 ± 0.0014 0.1304 0.3075
BSCN 0.0468 ± 0.0022 0.0423 ± 0.0026 0.0457 ± 0.0027 0.0436 ± 0.0025 0.0979 0.2054
WOA-SCN 0.0438 ± 0.0017 0.0402 ± 0.0008 0.0427 ± 0.0010 0.0422 ± 0.0021 0.0857 0.2010
GWO-SCN 0.0440 ± 0.0013 0.0401 ± 0.0009 0.0429 ± 0.0009 0.0411 ± 0.0024 0.0802 0.1921
BKA-SCN 0.0433 ± 0.0010 0.0398 ± 0.0008 0.0421 ± 0.0009 0.0407 ± 0.0026 0.0818 0.1977
IBKA1-SCN 0.0431 ± 0.0011 0.0396 ± 0.0005 0.0419 ± 0.0010 0.0405 ± 0.0023 0.0834 0.1989
IBKA2-SCN 0.0428 ± 0.0007 0.0395 ± 0.0012 0.0420 ± 0.0009 0.0404 ± 0.0022 0.0820 0.1980
IBKA-SCN 0.0425 ± 0.0013 0.0394 ± 0.0019 0.0417 ± 0.0011 0.0397 ± 0.0019 0.0897 0.1987
Table 5. Performance comparison of the compared models on the concrete dataset.
Algorithms | Training (L = 25) | Training (L = 50) | Testing (L = 25) | Testing (L = 50) | T (s) (L = 25) | T (s) (L = 50)
IRVFL 0.2054 ± 0.0213 0.1942 ± 0.0128 0.1990 ± 0.0195 0.1895 ± 0.0136 0.0126 0.0348
SCN 0.0987 ± 0.0036 0.0825 ± 0.0010 0.1053 ± 0.0024 0.1034 ± 0.0026 0.1834 0.4922
BSCN 0.1027 ± 0.0056 0.0901 ± 0.0022 0.1137 ± 0.0038 0.1098 ± 0.0039 0.1030 0.1728
WOA-SCN 0.0977 ± 0.0033 0.0823 ± 0.0028 0.1018 ± 0.0025 0.1009 ± 0.0016 0.0979 0.2024
GWO-SCN 0.0981 ± 0.0056 0.0821 ± 0.0021 0.1018 ± 0.0015 0.1013 ± 0.0014 0.0988 0.1994
BKA-SCN 0.0984 ± 0.0043 0.0818 ± 0.0018 0.1021 ± 0.0024 0.1012 ± 0.0021 0.0984 0.1977
IBKA1-SCN 0.0980 ± 0.0013 0.0822 ± 0.0012 0.1002 ± 0.0015 0.0995 ± 0.0021 0.1007 0.2026
IBKA2-SCN 0.0977 ± 0.0012 0.0819 ± 0.0018 0.1990 ± 0.0195 0.0989 ± 0.0021 0.1012 0.2063
IBKA-SCN 0.0974 ± 0.0013 0.0814 ± 0.0019 0.0997 ± 0.0019 0.0983 ± 0.0019 0.0997 0.2054
Table 6. Performance comparison of the compared models on the computer activity dataset.
Algorithms | Training (L = 25) | Training (L = 50) | Testing (L = 25) | Testing (L = 50) | T (s) (L = 25) | T (s) (L = 50)
IRVFL 0.2443 ± 0.0025 0.2122 ± 0.0024 0.2765 ± 0.0025 0.1843 ± 0.0024 0.0501 0.0713
SCN 0.0755 ± 0.0005 0.0416 ± 0.0003 0.0773 ± 0.0006 0.0441 ± 0.0005 0.8786 2.4618
BSCN 0.0807 ± 0.0016 0.0464 ± 0.0012 0.0827 ± 0.0009 0.0488 ± 0.0008 0.3421 0.8618
WOA-SCN 0.0559 ± 0.0047 0.0434 ± 0.0023 0.0533 ± 0.0086 0.0481 ± 0.0052 0.4225 1.0334
GWO-SCN 0.0536 ± 0.0045 0.0398 ± 0.0013 0.0516 ± 0.0077 0.0446 ± 0.0031 0.4164 0.9840
BKA-SCN 0.0584 ± 0.0056 0.0418 ± 0.0013 0.0521 ± 0.0074 0.0472 ± 0.0042 0.4179 1.0240
IBKA1-SCN 0.0502 ± 0.0024 0.0376 ± 0.0016 0.0522 ± 0.0086 0.0424 ± 0.0044 0.4489 1.0090
IBKA2-SCN 0.0488 ± 0.0047 0.0361 ± 0.0015 0.0509 ± 0.0066 0.0403 ± 0.0039 0.4196 0.9760
IBKA-SCN 0.0474 ± 0.0043 0.0341 ± 0.0011 0.0497 ± 0.0068 0.0393 ± 0.0039 0.4233 0.9696
Table 7. Performance comparison of the compared models on the pole dataset.
Algorithms | Training (L = 25) | Training (L = 50) | Testing (L = 25) | Testing (L = 50) | T (s) (L = 25) | T (s) (L = 50)
IRVFL 0.1355 ± 0.0373 0.2122 ± 0.0334 0.1446 ± 0.0464 0.1497 ± 0.0314 0.0336 0.0713
SCN 0.0997 ± 0.0037 0.1072 ± 0.0056 0.1044 ± 0.0047 0.1043 ± 0.0042 2.1984 5.5403
BSCN 0.1040 ± 0.0061 0.1127 ± 0.0078 0.1096 ± 0.0046 0.1092 ± 0.0050 0.9382 5.5403
WOA-SCN 0.0672 ± 0.0039 0.0658 ± 0.0027 0.0781 ± 0.0076 0.0756 ± 0.0036 0.6996 1.4723
GWO-SCN 0.0667 ± 0.0040 0.0648 ± 0.0023 0.0749 ± 0.0065 0.0744 ± 0.0038 0.7022 1.4843
BKA-SCN 0.0683 ± 0.0041 0.0645 ± 0.0023 0.0773 ± 0.0085 0.0747 ± 0.0042 0.6963 1.4663
IBKA1-SCN 0.0662 ± 0.0046 0.0651 ± 0.0041 0.0735 ± 0.0055 0.0756 ± 0.0048 0.6998 1.4971
IBKA2-SCN 0.0649 ± 0.0037 0.0644 ± 0.0022 0.0723 ± 0.0065 0.0739 ± 0.0052 0.7052 1.4403
IBKA-SCN 0.0644 ± 0.0043 0.0641 ± 0.0021 0.0696 ± 0.0063 0.0733 ± 0.0058 0.6969 1.4132
Table 8. Performance comparison of different algorithms.
Comparative Algorithms | RMSE | MAE | SSE | T (s)
SVM | 4.9806 | 3.8588 | 1041.87 | 21.033
GA-BP | 4.9189 | 4.0078 | 1051.5 | 16.021
SCN | 0.6179 | 0.3339 | 60.7208 | 4.276
SGDE-SCN | 0.4021 | 0.2741 | 43.0376 | 17.33
IBKA-SCN | 0.3872 | 0.2667 | 42.3154 | 2.362
