Article

Feature Selection for Data Classification in the Semiconductor Industry by a Hybrid of Simplified Swarm Optimization

Department of Industrial Engineering and Engineering Management, National Tsing Hua University, P.O. Box 24-60, Hsinchu 300, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2024, 13(12), 2242; https://doi.org/10.3390/electronics13122242
Submission received: 28 April 2024 / Revised: 27 May 2024 / Accepted: 28 May 2024 / Published: 7 June 2024

Abstract:
In the semiconductor manufacturing industry, achieving high yields is one of the pivotal factors for sustaining market competitiveness. When confronting the substantial volume of high-dimensional, nonlinear, and imbalanced data generated during semiconductor manufacturing processes, it becomes imperative to move beyond traditional approaches and incorporate machine learning methodologies. By employing nonlinear classification models, anomalies can be detected in near real time, facilitating deeper analysis of their root causes. Given the considerable dimensionality of production line data in semiconductor manufacturing, dimensionality reduction is necessary to mitigate noise and reduce computational costs. Feature selection is one of the primary methodologies for achieving this reduction. Wrapper-based heuristic algorithms, although characterized by high time complexity, often perform well in specific cases; when further combined into hybrid methodologies, they can satisfy both data quality and computational cost considerations. Accordingly, this study proposes a two-stage feature selection model. First, redundant features are eliminated using mutual information to reduce the feature space. A simplified swarm optimization algorithm with a purpose-built fitness function then selects the optimal feature subset from the candidate features. Finally, support vector machines serve as the classification model for validation. On a practical case, the proposed feature selection method achieves superior classification accuracy with fewer features for wafer anomaly classification, and its performance on public datasets further substantiates the effectiveness and generalization capability of the approach.

1. Introduction

Semiconductor wafer manufacturing is a capital-intensive and technology-driven industry. Technological advancements, following Moore’s Law, lead to exponential growth in circuit density over time [1]. As circuit density increases, semiconductor processes become more precise and intricate, escalating equipment and material costs. Undetected defective products on the production line consume resources without adding value, increasing costs and wasting capacity. Improving product yield is crucial for enhancing capacity and reducing costs in semiconductor manufacturing [2]. It also enhances product quality and reliability while alleviating pressures from R&D, equipment, and material costs.
Maintaining high yield requires robust quality control and real-time detection of production line anomalies. Semiconductor production is divided into front-end and back-end processes. The front-end process involves producing silicon wafers and fabricating integrated circuit components, while the back-end process includes the assembly, packaging, and final testing of individual dies. The front-end process is intricate, comprising hundreds of steps and about 80% of the total semiconductor manufacturing process [3]. After manufacturing, wafers undergo wafer acceptance tests (WAT) and chip probing (CP) to measure their electrical characteristics and functionality, determining their suitability for the back-end process.
This study focuses on the data analysis of parameters in the front-end semiconductor manufacturing process and wafer test measurements. Using WAT and CP data as input and identifying abnormal results from process machines as output, classification models predict abnormalities in specific processes during wafer manufacturing. The model analysis aims to identify key wafer test parameters relevant to specific processes, aiding production line optimization.
Traditionally, abnormality detection in wafer manufacturing employs statistical process control (SPC). Since its inception by Shewhart in the early 1930s, SPC has been widely used for process control and improvement in manufacturing. SPC uses control charts to monitor quality characteristics, setting upper and lower control limits to detect process abnormalities. However, due to the nonlinear nature of many abnormalities in semiconductor manufacturing, traditional linear SPC methods often fail to detect them timely. This leads to undetected issues at workstations, accumulating problems as wafers pass through multiple steps, resulting in unnecessary processing costs and ineffective judgments like overkill or underkill [4]. Consequently, additional manpower is required to redefine control limits and reclassify abnormal messages based on experience.
Overkill and underkill are key indicators in quality control, evaluating misjudgments during wafer testing. Overkill refers to normal wafers being classified as abnormal, while underkill refers to abnormal wafers being classified as normal. Both lead to losses for semiconductor manufacturers: overkill results in intercepted normal products, while underkill increases processing costs and wastes resources. Underkill can also impact the company’s reputation by shipping defective products to customers, especially for those emphasizing high yield rates.
As market demands and customization requirements rise, coupled with the complexity of semiconductor processes involving numerous steps and parameters, managing nearly a million daily control charts becomes challenging. Wafers passing testing may later show anomalies reported by customers, requiring engineers to retrospectively search for process issues, incurring significant time costs and impacting reputation.
Given the high complexity and yield requirements in semiconductor manufacturing, traditional SPC methods struggle with non-linearity, high dimensionality, and class imbalance in production data [5]. Machine learning methods, introduced in recent decades, offer better predictions for anomaly detection by learning from massive datasets.
Early efforts to address SPC’s limitations employed techniques like principal component analysis (PCA) and partial least squares (PLS) [4], but these methods struggled with real-time monitoring due to the need for manual adjustments in semiconductor processes. Machine learning methods like K-nearest neighbor (KNN) [6], support vector machine (SVM) [5], decision tree (DT) [7], and neural networks [8,9] have shown better performance in anomaly detection.
Feature engineering techniques are crucial for reducing data dimensionality and mitigating noise, which is essential for high-dimensional data. Feature selection, a subset of feature engineering, selects the most relevant features, maintaining interpretability and facilitating further analysis [10]. Traditional feature selection relies on engineers’ domain expertise, but modern methods combine filter and heuristic algorithms for improved accuracy and efficiency. Hybrid methods use filter approaches to narrow features and heuristic algorithms to find optimal subsets within limited timeframes, enhancing accuracy and reducing computational time [11].
Class imbalance, which is common in semiconductor manufacturing, complicates classification problems. Methods to address this imbalance include undersampling, oversampling, feature engineering, and designing specialized algorithms [12,13]. Research focuses on predicting wafer yield, detecting machine failures, and analyzing key process parameters for real-time adjustments [14,15,16]. This study extends previous work by classifying quality into three categories (normal, abnormal, and at-risk) using WAT and CP data.
This study aims to use heuristic algorithm-based feature selection combined with machine learning to propose an anomaly detection model for high-dimensional, nonlinear, and imbalanced data. The objectives are as follows:
  • Propose a hybrid feature selection method combining mutual information (MI) with a simplified swarm optimization (SSO) algorithm using non-binary encoding. This method reduces data dimensionality and accurately selects key factors for anomaly prediction.
  • Develop an anomaly detection approach suitable for multivariate, imbalanced data that is applied to real-world cases for precise, real-time wafer quality management.
With over six hundred WAT parameters, identifying key parameters through feature selection helps engineers understand anomaly causes. This study leverages heuristic algorithms and machine learning to improve anomaly detection in semiconductor manufacturing.

2. Preliminary Issues

This study addresses the issue of abnormal wafer detection, which falls within the domain of imbalanced classification problems. This section elucidates the definition and implications of this issue through discussions on feature selection, imbalanced data classification models, and relevant simplified swarm optimization (SSO) algorithms, forming the foundational basis for the methodology employed in this research.

2.1. Feature Selection

Given the high complexity of semiconductor manufacturing processes, it is imperative to perform dimensionality reduction on the data. This allows the identification of the parameters that have the most significant impacts on analytical outcomes from among thousands of manufacturing and testing parameters. Feature selection is a common method for data dimensionality reduction that is widely applied in classification, data mining, and object detection [17]. It enhances the precision of machine learning models, prevents overfitting, and reduces computational costs [18]. Feature selection entails identifying a subset of features from the original set to maximize relevance and minimize redundancy. It can be categorized into three types: filter, wrapper, and embedded methods.

2.1.1. Filter Methods

Filters are among the earliest feature selection methods [19]. They evaluate features prior to model learning, focusing on the data’s characteristics using statistical methods such as information gain, distance, consistency, similarity, or statistical metrics [20,21]. These methods are independent of classification models. For example, the Relief method, based on instance learning, uses the Euclidean distance to calculate feature scores [22], although it is limited to binary classification problems and cannot handle missing data. Improved versions such as Relief-A, Relief-B, and Relief-F address these issues [23,24]. Filter methods based on information theory and correlation coefficients, such as information gain (IG), mutual information (MI), and joint mutual information (JMI), have been extensively researched [11,25,26,27,28,29].

2.1.2. Wrappers

Wrappers combine learning model feedback during feature selection. They estimate the impacts of adding or removing features based on the error rate or accuracy from the training model, searching the feature space for the best predictive performance [19]. Due to the NP-hard nature of feature selection [30], heuristic algorithms like genetic algorithms and particle swarm optimization are often used [31,32]. Although wrappers achieve higher accuracy, they have high time complexity and may overfit [33].

2.1.3. Embedded Methods

Embedded methods integrate feature selection with learning, optimizing both machine learning and feature selection parameters simultaneously [10]. These methods reduce features during model training using regularization or activation functions [18,34]. They balance accuracy and computational costs better than wrappers but still carry overfitting risks and require redesigning for specific algorithms [11].

2.1.4. Hybrid Methods

Hybrid methods combine filters with wrappers to enhance feature selection efficiency and accuracy. They first use filters to reduce dimensionality and then apply wrappers for precise searches. This approach has shown significant performance improvements in various domains like text classification and semiconductor manufacturing yield prediction [35,36,37].
In semiconductor manufacturing, feature selection methods are more appropriate than feature extraction for addressing anomalies in wafer yield, maintaining data interpretability after dimensionality reduction.

2.2. Classification Algorithms

Machine learning develops algorithms that simulate human intelligence, adjusting function structures dynamically through iterative learning. It can substitute repetitive tasks and identify data regularities overlooked by humans [38]. Machine learning is categorized into supervised, unsupervised, and semi-supervised learning. This study primarily employs supervised learning due to its wide applicability and good performance across various domains.

2.2.1. K-Nearest Neighbor (KNN)

The K-nearest neighbor algorithm (KNN) predicts the class of unknown data points based on nearby known samples. It calculates distances between a new data point and the training samples, classifying the new point according to the majority class of its nearest neighbors. KNN is straightforward, using parameters like the integer k, labeled data, and a distance formula. Despite high computational costs and sensitivity to noise, KNN is widely applied in domains such as data mining and image processing. In semiconductor manufacturing, KNN is used for online fault detection and failure detection [39].

2.2.2. Support Vector Machine (SVM)

Support vector machine (SVM), proposed by Boser, Guyon, and Vapnik in 1992 [40], has matured significantly over more than a decade of development. Today, SVM stands out as one of the most widely applied algorithms in machine learning due to its robustness, unique global optimal solution, and excellent generalization capability. Traditional linear algorithms may yield suboptimal results when faced with nonlinear data distributions. SVM addresses this issue by seeking the optimal hyperplane in the feature space for classification. By increasing the dimensionality of the feature space and then conducting segmentation, SVM achieves better classification results.
SVM is a kernel technique that uses kernel methods to map data into a higher-dimensional space. It treats machine learning tasks as convex function optimization problems, aiming to find the optimal solution through computation rather than heuristic algorithms. Upon completion of the SVM computation, support vectors are obtained, each defining the boundary of a hyperplane. The complexity of the problem affects the number of support vectors. SVM imposes additional constraints on optimization problems: the hyperplane must be positioned at the maximum distance between different classes, enhancing the generalization capability of SVM [41].
SVM has various variations that can be used to solve classification, regression, or distribution estimation problems. In classification tasks, C-Support Vector Classification (C-SVC) [42,43] is primarily employed. C-SVC transforms the classification problem into a primal optimization problem, assuming a set of training vectors $x_i \in \mathbb{R}^n$, $i = 1, \ldots, l$, with a corresponding label vector $y \in \mathbb{R}^l$, where $y_i \in \{1, -1\}$:
$$\min_{w,\,b,\,\xi}\ \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \tag{1}$$
$$\text{subject to}\quad y_i \left( w^T \phi(x_i) + b \right) \ge 1 - \xi_i, \qquad \xi_i \ge 0,\ i = 1, \ldots, l \tag{2}$$
where $w$ is the weight vector; $\xi_i$ are slack variables that allow for classification errors; $\phi(x_i)$ maps the vector $x_i$ into a higher-dimensional space; and $C > 0$ is a regularization parameter.
Since $w$ is typically high-dimensional, the primal optimization problem above is usually converted into the dual problem shown in Equations (3) and (4):
$$\min_{\alpha}\ \frac{1}{2} \alpha^T Q \alpha - e^T \alpha \tag{3}$$
$$\text{subject to}\quad y^T \alpha = 0, \qquad 0 \le \alpha_i \le C,\ i = 1, \ldots, l \tag{4}$$
where $e = [1, \ldots, 1]^T$ is the all-ones vector, $Q$ is a positive semi-definite matrix with $Q_{ij} \equiv y_i y_j K(x_i, x_j)$, and $K(x_i, x_j) \equiv \phi(x_i)^T \phi(x_j)$ is the kernel function.
Upon solving the dual problem, according to the primal–dual relationship, it can be inferred that the optimal solution for w satisfies Equation (5):
$$w = \sum_{i=1}^{l} y_i \alpha_i \phi(x_i) \tag{5}$$
The decision function is given by Equation (6):
$$\operatorname{sgn}\left( w^T \phi(x) + b \right) = \operatorname{sgn}\left( \sum_{i=1}^{l} y_i \alpha_i K(x_i, x) + b \right) \tag{6}$$
In the C-SVC algorithm, the variables with the greatest impact on the classification results are C and the choice of kernel function. As C increases, the emphasis shifts to minimizing the number of misclassified instances; conversely, reducing C tolerates more classification errors. A wide variety of kernel functions is available, the most common being the following (a brief usage sketch follows the list):
  • linear kernel—$K(x_i, x_j) = x_i^T x_j$
  • polynomial kernel—$K(x_i, x_j) = (\gamma\, x_i^T x_j + r)^q$, $q > 0$
  • hyperbolic tangent, also known as sigmoid—$K(x_i, x_j) = \tanh(\gamma\, x_i^T x_j + r)$
  • Gaussian radial basis function, RBF—$K(x_i, x_j) = e^{-\gamma \lVert x_i - x_j \rVert^2}$
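As an illustration of how these choices surface in practice, the following minimal sketch fits a C-SVC with an RBF kernel and grid-searches C and γ. It assumes scikit-learn, which the text does not name, and uses a toy dataset; it is not the authors' implementation.

```python
# Illustrative sketch: C-SVC with an RBF kernel on toy nonlinear data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # 100 samples, 5 features
y = (X[:, 0] * X[:, 1] > 0).astype(int)    # a nonlinear target

# C and the kernel are the two choices that matter most (see text):
# larger C penalizes misclassification harder; gamma sets the RBF width.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
    cv=4,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```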
SVM has been widely employed across various research domains, including text classification, image classification, bioinformatics, facial recognition, and various predictive tasks [44]. Despite its solid theoretical foundations and strong generalization capabilities, SVM faces challenges such as high computational costs for large datasets and the need to convert multi-class classification problems into binary classification problems before processing.

2.2.3. Decision Tree and Random Forest

When a set of rules is used to partition the predictive target space and can be represented in a tree structure, it is referred to as a decision tree (DT) as shown in Figure 1. DTs consist of branches and nodes, with the root node containing the entire dataset. Each branch represents a rule, and the child nodes (decision nodes or leaf nodes) associated with the branch retain only the data that satisfy the condition of that branch. Rules are established based on the dataset contained within each decision node. Leaf nodes indicate the cessation of branching.
Decision trees offer numerous advantages: they do not require data normalization, have a simple structure and fast computation time, and their models are easily interpretable. However, decision trees are highly sensitive to data, leading to poor robustness and susceptibility to overfitting, and in many applications, their accuracy still cannot match that of other machine learning methods. To address these limitations, Breiman introduced Random Forest (RF) in 2001 [45]. RF is an ensemble learning model based on decision trees, significantly improving prediction performance. It is still widely used today to solve various classification and regression problems.
In tree-based algorithms, the design of splitting rules and pruning strategies significantly influences model performance. Splitting rules directly impact classification outcomes, while pruning strategies determine when to terminate branching, avoiding overfitting due to overly detailed branching. The RF model adopts the Classification and Regression Tree (CART) algorithm [46] as its basis. CART restricts each decision node to perform binary splitting and employs the Gini coefficient as the splitting rule, along with cost-complexity pruning as the pruning rule.
The Gini coefficient (Gini index) is a metric used to measure information impurity. By calculating the extent to which the Gini coefficient decreases after data splitting, we can evaluate the quality of splitting rules and select the condition that maximally reduces impurity as the branching criterion. Suppose we have a dataset D, and the formula for calculating the Gini coefficient is as follows (Equation (7)):
$$Gini(D) = 1 - \sum_{i=1}^{c} p_i^2 \tag{7}$$
where $p_i$ is the (non-zero) proportion of instances of class $i$ in dataset $D$, and $c$ is the number of classes.
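Equation (7) translates directly into a few lines of code; the following is a minimal sketch of the impurity computation, not the CART implementation itself.

```python
import numpy as np

def gini(labels: np.ndarray) -> float:
    """Gini impurity of a node, Equation (7): 1 - sum_i p_i^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# A pure node has impurity 0; a balanced binary node has impurity 0.5.
print(gini(np.array([0, 0, 0, 0])))   # 0.0
print(gini(np.array([0, 0, 1, 1])))   # 0.5
```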
CART employs the same pre-pruning method as other tree-based algorithms: during the branching process, a threshold is set to stop the growth of the decision tree, such as limiting the number of data points in leaf nodes or requiring the reduction in the Gini coefficient to exceed a minimum (e.g., 0.1). However, while pre-pruning is efficient, it carries a risk of over-pruning. Therefore, after the tree has finished growing, CART applies cost-complexity pruning to avoid generating leaf nodes with few samples and to increase the decision tree’s tolerance to noise. This is achieved by evaluating the tree using Equation (8):
$$R_{\alpha}(T) = R(T) + \alpha\, |t|, \qquad t \in T \tag{8}$$
where R(T) represents the sum of residuals over all leaf nodes in a tree, |t| denotes the number of leaf nodes in the tree, and α is a penalty factor. The tree structure that minimizes Rα(T) is selected as the model structure.
Random Forest (RF) constructs multiple CART trees based on these rules. Each tree originates from the same data distribution but is trained independently. RF introduces two levels of randomness during training: before the generation of each tree, a random sample of data points is drawn from the original dataset, and a random subset of features is selected from the original feature space to build the tree. During the prediction phase, for classification problems, RF takes a majority vote over the results of the CART submodels to obtain the final result. The law of large numbers and the randomness in RF effectively prevent overfitting, and RF has shown good predictive accuracy on various classification datasets [45].
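The two levels of randomness map naturally onto library parameters. The sketch below uses scikit-learn's RandomForestClassifier as an assumed stand-in; note that this particular implementation draws the random feature subset at each split rather than once per tree.

```python
from sklearn.ensemble import RandomForestClassifier

# The two levels of randomness described above map onto two parameters:
# bootstrap=True resamples the data before each tree, and max_features
# limits the random feature subset considered when splitting.
rf = RandomForestClassifier(
    n_estimators=200,     # number of CART trees in the ensemble
    bootstrap=True,       # random sample of data points per tree
    max_features="sqrt",  # random feature subset per split
    random_state=0,
)
# rf.fit(X_train, y_train); rf.predict(X_test) takes a majority vote.
```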
In summary, the KNN, SVM, and RF models are all machine learning algorithms and relatively simple non-linear models. These three supervised learning methods have demonstrated good performance across various classification problems [13]. Therefore, in this study, an experiment was designed to select the most suitable classification model from these three algorithms to serve as the evaluation model after feature selection.

2.3. Simplified Swarm Optimization

In numerous practical applications, the issues encountered tend to be relatively complex and frequently fall under the category of NP-hard problems, rendering the determination of optimal solutions within finite timeframes challenging. Consequently, many scholars resort to metaheuristic algorithms—a class of methods that, without guaranteeing feasibility or optimality, aim to find approximate optimal solutions at reasonable computational costs—to address such problems [47]. In the context of feature selection problems, metaheuristic algorithms such as genetic algorithms and particle swarm optimization (PSO) are commonly employed for solution derivation.
Simplified swarm optimization (SSO) was initially proposed by Yeh in 2009 [48] as an enhanced version of particle swarm optimization (PSO) [49], a stochastic optimization method based on swarm intelligence. Swarm intelligence-based approaches typically generate a set of solutions and then update them by following the lead of the best solutions within the swarm. Through continuous updating and iteration, these methods gradually approach the optimal solution.
The core concept of SSO lies in integrating the global best solution (gbest), local best solution (pbest), self-solution, and a random solution at each iteration. The updated position of the solutions is determined by random numbers and parameters predefined within the algorithm. This enables SSO to maintain solution diversity during the search process, thus avoiding convergence to local optima. The specific updating formula is as follows (Equation (9)):
$$x_{ij}^{t+1} = \begin{cases} gbest_j & \text{if } \rho_{ij}^{t} \in [0, C_g) \\ pbest_{ij}^{t} & \text{if } \rho_{ij}^{t} \in [C_g, C_p) \\ x_{ij}^{t} & \text{if } \rho_{ij}^{t} \in [C_p, C_w) \\ x & \text{if } \rho_{ij}^{t} \in [C_w, 1) \end{cases} \tag{9}$$
Here, $x_{ij}^{t+1}$ represents the jth variable of the ith solution at iteration t + 1; $\rho_{ij}^{t}$ is a uniformly distributed random number in the interval [0, 1]; $C_g$, $C_p$, and $C_w$ are the hyperparameters of SSO, lying in the interval (0, 1) with the constraint $C_g < C_p < C_w$; and $x$ is a random number generated within predefined upper and lower bounds. If $\rho$ falls in the interval $[0, C_g)$, the variable $x_{ij}^{t+1}$ is set to the jth variable of gbest. If $\rho$ falls in $[C_g, C_p)$, it is set to the jth variable of pbest, the best historical solution of the ith solution. If $\rho$ falls in $[C_p, C_w)$, the variable remains the same as in the previous generation. If $\rho$ falls in $[C_w, 1)$, a new variable is randomly generated. The random variable $x$ helps maintain diversity in the search process, preventing the results from being trapped in local optima. Furthermore, since $C_g$, $C_p$, and $C_w$ directly influence the convergence speed and performance of SSO, past studies on SSO have often used Taguchi orthogonal arrays to select the optimal parameter combination.
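The step function in Equation (9) reduces to a single pass over the variables of one solution. The following is a hedged sketch of one update for an integer-encoded solution; the function name, bounds, and parameter handling are illustrative assumptions, not the authors' code.

```python
import random

def sso_update(x, pbest, gbest, cg, cp, cw, lower, upper):
    """One SSO update of a single solution (Equation (9)).

    x, pbest, gbest: lists of integers of equal length.
    (cg, cp, cw) must satisfy cg < cp < cw, all in (0, 1).
    """
    new_x = []
    for j in range(len(x)):
        rho = random.random()                    # uniform on [0, 1)
        if rho < cg:
            new_x.append(gbest[j])               # copy from global best
        elif rho < cp:
            new_x.append(pbest[j])               # copy from personal best
        elif rho < cw:
            new_x.append(x[j])                   # keep current value
        else:
            new_x.append(random.randint(lower, upper - 1))  # random restart
    return new_x
```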
SSO addresses the shortcomings of PSO in solving discrete problems by designing step functions, thereby mitigating premature convergence and suboptimal performance. Extensive research across various domains, such as task allocation [50,51,52], facility location [53,54], and several practical areas [55,56,57,58], has consistently demonstrated that SSO outperforms traditional PSO or GA in both efficiency and solution quality. Moreover, SSO has been successfully applied to feature selection problems.
Chung and Wahid proposed a hybrid system integrating SSO with specialized preprocessing methods and local search strategies, demonstrating its effectiveness in network intrusion detection classification problems [59]. Lai et al. combined different filters and various wrapper methods derived from SSO, proving that hybrid selection methods significantly improve cancer classification problems [60].

3. The Proposed Approach

The objective of this study is to achieve optimal predictive accuracy with a minimal feature count. Feature selection is conducted in two stages. In the first stage, mutual information (MI) is employed as a filter to assess the relevance of features to the prediction target. Features with lower MI values are removed after ranking, significantly reducing the feature space. In the second stage, simplified swarm optimization (SSO) is utilized to search within the limited feature space. The fitness function is computed based on the results obtained from the classification algorithm. Through the iterative addition and removal of features within feature subsets, an optimal feature combination is identified. The hybrid feature selection method employed in this study is termed MI-SSO.

3.1. Mutual Information

Mutual information (MI) is employed to gauge the interdependence between two variables, utilizing an information entropy estimation based on K-nearest neighbor distances. It yields higher accuracy at lower computational costs compared to conventional bootstrap aggregation (bagging) methods. The specific formula for mutual information is as follows (Equation (10)):
$$MI(X, Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) = H(X) + H(Y) - H(X, Y) \tag{10}$$
In this context, X represents features, Y denotes the predictive target, H stands for information entropy, H(X|Y) signifies the conditional entropy of X given Y values, and H(X, Y) represents the joint entropy of X and Y. As the MI value increases, it indicates a higher correlation with the predictive target. Following the computation of MI values for all features in accordance with Equation (10), features can be sorted in descending order based on their MI values. Depending on the requirements, the top K features can be retained as initial candidate solutions for subsequent feature selection stages.
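A first-stage filter along these lines can be sketched as follows. scikit-learn's mutual_info_classif (a kNN-based MI estimator, consistent with the estimation approach described above) and the top_k default are assumptions for illustration, not the authors' exact code.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_filter(X, y, top_k=100):
    """Stage 1: rank features by MI with the target and keep the top K."""
    mi = mutual_info_classif(X, y, random_state=0)  # MI of each feature with y
    order = np.argsort(mi)[::-1]                    # indices, descending MI
    return order[:top_k]                            # candidate feature indices
```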

3.2. Feature Selection Based on SSO

In the second stage, simplified swarm optimization (SSO) selects the optimal feature subset from the reduced feature space. SSO uses swarm intelligence to explore efficiently, evaluating subsets based on predictive accuracy. By iteratively updating with global best, local best, and random solutions, SSO balances exploration and exploitation, reducing premature convergence and increasing the chance of finding the global optimum.

3.2.1. Particle Encoding Method

When solving the feature selection problem using heuristic algorithms, binary encoding is commonly employed [31,32,61], with the length of the solution equivalent to the number of features. However, when the number of features becomes excessive, binary encoding may result in prolonged solution times. Therefore, when there are constraints on the maximum number of feature subsets, the use of multivariate encoding can shorten the solution length and computational time.
The following describes the encoding and decoding process employed in this study. Initially, the K candidate features are numbered sequentially from 0 as integers. Under the constraint of selecting at most j features, a particle represents the numerical set of selected feature combinations, where the ith particle is $X_i = (x_1^i, x_2^i, x_3^i, \ldots, x_j^i)$, subject to the conditions that (1) $x_k^i$ is an integer and (2) $0 \le x_k^i < K$ for all $k \le j$. For example (Figure 2), assume there are 60 candidate features in the dataset and the intention is to retain only 10% of them, so the solution length is set to 6. A particle X = (34, 5, 28, 5, 5, 11) indicates the selection of features numbered 34, 5, 28, and 11 from the original feature space, totaling 4 features. As this example shows, duplicate values within a particle are allowed, so although at most 6 features can be selected, the resulting subset may be smaller than the upper limit. This flexible design allows the search to explore smaller feature subsets during solving.
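Decoding under this scheme amounts to collapsing duplicates; the helper name below is introduced here for illustration only.

```python
def decode(particle):
    """Map an integer particle to its feature subset.

    Duplicates are collapsed, so a length-6 particle may select
    fewer than 6 features, as in the example above.
    """
    return sorted(set(particle))

print(decode([34, 5, 28, 5, 5, 11]))   # [5, 11, 28, 34] -> 4 features
```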
Shorter solution lengths can reduce the computational time, but excessively short lengths may overlook optimal solutions. When designing solution lengths, consider historical research and optimal feature quantities from other methods. Flexibility is crucial to avoid overly stringent restrictions. The fitness function controls the precise number of features using penalty functions to prevent excessive parameters.

3.2.2. Fitness Function

As described in Section 3.2.1, the classification performance varies with each feature subset inputted into the classifier. Therefore, a fitness function is needed to evaluate the performance of solutions generated during the iterative updating process of SSO. The fitness function in this study is designed to achieve the best model prediction accuracy with the fewest features. To address this dual-objective problem, we refer to [60] and employ a weighted approach, as shown in Equation (11):
$$\max\ \text{Fitness}(f) = \alpha \cdot \frac{\text{MCC}_{SCV=k}(C_f) + 1}{2} + (1 - \alpha) \cdot \frac{\delta(F) - \delta(f)}{\delta(F)} \tag{11}$$
The fitness function consists of two parts: classification accuracy and the number of features. A higher Fitness(f) indicates better solution quality. $\text{MCC}_{SCV=k}(C_f)$ is the average Matthews correlation coefficient (MCC) from stratified k-fold cross-validation (SCV) using classifier C with feature subset f. δ(F) and δ(f) denote the numbers of features in the original dataset and in the subset, respectively; the fewer features in the subset, the higher the fitness value. α is a weight in [0, 1], adjustable according to the importance placed on δ(f).
Given the dataset’s imbalance, SCV ensures consistent class proportions in training and validation sets [62]. The fitness function uses MCC for evaluation, as it handles class imbalance better than accuracy and extends to multi-class problems through derivation, as shown in Equation (12) [63]:
$$\text{MCC} = \frac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{cov}(X, X)\, \operatorname{cov}(Y, Y)}} \tag{12}$$
The specific calculation method for the underlying correlation coefficients is as follows (Equation (13)):
$$\operatorname{cov}(X, Y) = \sum_{k=1}^{N} w_k \operatorname{cov}(X_k, Y_k) = \frac{1}{N} \sum_{s=1}^{S} \sum_{k=1}^{N} (X_{sk} - \bar{X}_k)(Y_{sk} - \bar{Y}_k) \tag{13}$$
Let N denote the number of classes and s a single data instance. $X_{sk}$ is a binary variable indicating whether the true class of s is k, and $Y_{sk}$ indicates the predicted class. $\bar{X}_k$ and $\bar{Y}_k$ represent the proportions of the true and predicted class k in the dataset.
The MCC value ranges from −1 to 1, with 1 indicating perfect prediction, 0 indicating random prediction, and −1 indicating complete disagreement between predictions and observations. The feature proportion in the fitness function ranges from 0 to 1. To align their numerical ranges, MCC is normalized [64]. After shifting and scaling, MCC values fall within 0 to 1.
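Putting Equations (11)–(13) together, a hedged sketch of the fitness computation might look as follows; scikit-learn's multi-class matthews_corrcoef and StratifiedKFold stand in for the authors' SCV/MCC routine, and the function signature is an assumption for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import make_scorer, matthews_corrcoef

def fitness(clf, X, y, subset, n_total, alpha=0.8, k=4):
    """Equation (11): weighted sum of normalized MCC and feature parsimony."""
    scorer = make_scorer(matthews_corrcoef)
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    mcc = cross_val_score(clf, X[:, subset], y, cv=skf, scoring=scorer).mean()
    mcc_norm = (mcc + 1.0) / 2.0                    # shift/scale MCC to [0, 1]
    parsimony = (n_total - len(subset)) / n_total   # fewer features -> higher
    return alpha * mcc_norm + (1.0 - alpha) * parsimony
```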

3.2.3. Updating Steps

Aligned with the integer encoding approach, the basic SSO updating mechanism is employed. Initially, a set of solutions is randomly generated from the MI candidate solutions, and gbest and pbest are recorded based on the fitness function. During the updating process, each variable of each solution is updated according to the step function (Equation (9)). After updating, the fitness function is computed, and gbest and pbest are updated. This cycle iterates until convergence. The update process is illustrated in the flowchart in Figure 3; a condensed code sketch follows.
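Combining the sketches above (mi_filter, sso_update, decode, and fitness are assumed in scope), the Figure 3 loop can be condensed as follows; all names and defaults are illustrative, not the authors' implementation.

```python
import random

def mi_sso(X, y, clf, n_sol=50, n_gen=100, n_var=30, k_cand=100, alpha=0.8):
    """End-to-end MI-SSO sketch following Figure 3."""
    cand = mi_filter(X, y, top_k=k_cand)        # stage 1: candidate features

    def score(sol):
        subset = cand[decode(sol)]              # particle -> original features
        return fitness(clf, X, y, subset, X.shape[1], alpha=alpha)

    # Random initial swarm of integer-encoded solutions.
    sols = [[random.randrange(k_cand) for _ in range(n_var)] for _ in range(n_sol)]
    pbest = [s[:] for s in sols]
    gbest = max(sols, key=score)[:]

    for _ in range(n_gen):                      # iterate until budget exhausted
        for i in range(n_sol):
            sols[i] = sso_update(sols[i], pbest[i], gbest, 0.4, 0.7, 0.9, 0, k_cand)
            # Scores are recomputed here for brevity; a real implementation
            # would cache fitness values instead of re-running cross-validation.
            if score(sols[i]) > score(pbest[i]):
                pbest[i] = sols[i][:]
                if score(pbest[i]) > score(gbest):
                    gbest = pbest[i][:]
    return cand[decode(gbest)]                  # indices of selected features
```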

4. Experimental Results and Analysis

This section introduces the data used in the experiments (Section 4.1), explains the proposed feature selection method and hyperparameter configuration (Section 4.2), presents the experimental results from MI-SSO and compares its performance with other methods (Section 4.3), and details the application of MI-SSO in semiconductor anomaly detection (Section 4.4).

4.1. Description of the Datasets

Due to the confidential nature of semiconductor manufacturing, obtaining related datasets is challenging. Therefore, this study uses publicly available datasets for validating feature selection. To approximate wafer anomaly classification, the selected datasets must have more features than data instances, and at least half must be multi-class. The following six publicly available datasets [18,42,43,65,66] were chosen for experimentation, as shown in Table 1:
For wafer anomaly classification, this study used data from a leading wafer foundry in Taiwan. After preprocessing, the dataset included 426 wafer batches with measurements for 672 test items, with features outnumbering instances by approximately 1.57 times. Domain experts categorized the wafers into three classes (normal, risk, and anomaly) with proportions of 380:23:14, respectively.

4.2. MI-SSO Parameter Configuration

The parameters pre-configured in this study include: the number of candidate solutions K selected by the MI-based feature selection in the first stage, SSO hyperparameters (Cg, Cp, and Cw) in the second stage, the SSO solution length Nvar, the optimal α in the fitness function, and the classification algorithm for quality verification.
To determine the optimal parameter combination for MI-SSO, we conducted experiments using six publicly available datasets to assess the impacts of various parameter settings on the prediction results. Given the longer computation times of heuristic algorithms, small-sample experiments were employed. Each parameter combination was tested 10 times, recording the Matthews correlation coefficient (MCC) and the number of selected features (#F). The average MCC and #F over the 10 repetitions were used to evaluate classification performance.
Compared to other parameters, the choice of classifier significantly impacts the quality of classification results. Therefore, we first compared the effects of using three different algorithms, namely KNN, SVM, and RF, as classifiers on the feature selection results. In this experiment, the hyperparameters of SSO (Cg, Cp, and Cw) were set to default values (0.4, 0.7, and 0.9), and the α parameter in the fitness function was set to 0.8 according to reference [60]. The number of candidate solutions K was set to 100, following the recommendation in Ref. [67]. The iteration number (Ngen) of SSO was set to 100, the number of solutions (Nsol) was set to 50, and k in SCV was set to four. All classifiers were retrained using grid search to search for model hyperparameters based on different datasets. According to the experimental results (Table 2) and considering the classification results across all six datasets, SVM achieved the highest classification accuracy, selected the fewest features, and required the lowest computational time. Therefore, in all subsequent experiments, SVM was chosen as the classifier for the MI-SSO method.
After selecting the classifier, the next step was to find the optimal combination of SSO hyperparameters through experimentation. The hyperparameter combinations were designed using the L9 orthogonal array from the Taguchi method, reducing the full factorial of 27 experiments (three factors at three levels) to 9 sets. However, one combination did not satisfy the prerequisite Cg < Cp < Cw, leaving eight different hyperparameter combinations.
Keeping other experimental conditions constant, the results (Table 3) reveal that although the combinations with the highest MCC or the fewest features vary across the six datasets, the better-performing combinations show similar performance: the difference in MCC is less than 0.05, and the variance in the number of selected features is less than one. Therefore, to determine the optimal SSO hyperparameters, the results from the six datasets were considered collectively, and the combination with the highest average fitness function value was chosen: (Cg, Cp, Cw) = (0.4, 0.7, 0.9).
After determining the classifier and SSO hyperparameters, only the number of candidate solutions K, the solution length Nvar, and the fitness function hyperparameter α remain undecided. According to the experimental results of Dabba et al., when using MI as the first-stage feature selection method in microarray problems, K is set to 100 [67]. Both Nvar and α affect the search direction and convergence speed of SSO. Given that classification accuracy is more important than the number of features, experiments were designed accordingly. Keeping other settings constant, the experimental results are shown in Table 4. Although the MCC values under different combinations are close, the number of features varies significantly. This indicates that when applying MI-SSO to a new dataset, the experiment should be repeated to find the (Nvar, α) combination with the highest fitness function value for that dataset, rather than using a fixed combination.
Summarizing the numerous small-sample experiments conducted using publicly available datasets, it was found that MI-SSO, when employing SVM as the classifier and setting the parameters (Cg, Cp, Cw, and K) to (0.4, 0.7, 0.9, and 100), achieved higher classification accuracy with fewer features across different classification problems. However, the settings of Nvar and α showed significant variations in feature selection results due to differences in data characteristics. Therefore, to achieve higher classification accuracy and fewer features, it is necessary to redesign experiments based on the specific characteristics of each dataset to find the optimal parameter combination.

4.3. Experimental Results

The code used in this study was implemented in Python 3.10.5 using Visual Studio Code. The experiments were conducted on a device equipped with an AMD Ryzen 5 5600G with Radeon Graphics processor running at 3.90 GHz and 16 GB of RAM. All datasets used in the experiments were preprocessed by imputing missing values with the mean and performing min–max normalization. Additionally, for each dataset, a grid search was employed to find the optimal hyperparameter combination for SVM.
According to the experimental results in Section 4.2, the experimental settings are as shown in Table 5:
In addition to MI-SSO, this study also conducted experiments using three other methods—MI, MI-GA, and MI-PSO—to compare them with the proposed method as follows:
  • MI—This method uses only MI for feature selection. It sequentially selects the top K features based on their MI values (K = 1, 2, …) and evaluates their classification performance by incorporating them into the model. The process continues until there is no improvement in classification results for 100 consecutive solutions. This identifies the top K features that yield the best model performance.
  • MI-GA—This method replaces the SSO search with a genetic algorithm (GA), with crossover and mutation probabilities set to 0.8 and 0.2, respectively [68].
  • MI-PSO—This method replaces the SSO search with particle swarm optimization (PSO), with the inertia weight w set to 0.9 and the acceleration constants (c1, c2) set to (2, 2) [49].
The six public datasets were repeatedly tested using the specified parameters for 30 trials. The iteration number, number of solutions, and solution length for GA and PSO were the same as for SSO. The average performance of the four algorithms is shown in Table 6.
MI-SSO achieved the highest MCC in five of the six datasets, indicating superior classification accuracy. For the Lung dataset, however, MI-SSO did not perform as well as MI, likely because it selected too few features. Regarding the number of features, MI-SSO selected the fewest only in the Ovarian dataset; in the other datasets, it selected relatively few features but not the fewest, showing a tendency to trade a few extra features for better accuracy, consistent with the design choice α > 0.5.
In terms of runtime as shown in Table 7, MI had the shortest runtime, averaging 3 to 4 min depending on the dataset size. Among MI-GA, MI-PSO, and MI-SSO, MI-PSO was the fastest, followed by MI-GA and MI-SSO, with less than 8 s of difference among them.
In the field of gene expression microarrays, numerous published papers explore the application of different feature selection methods. This study selected five feature selection methods for comparison with MI-SSO based on the following criteria: published within the last four years, frequently cited, and tested on the same public datasets. The methods are MIM-mMFA [67], VS-CCPSO [68], MTPSO [69], TOPSIS-Jaya [70], and SARA-SVM [71]. Since accuracy is the primary evaluation criterion in gene expression microarray studies, these five methods use accuracy as the main benchmark and objective function. To enable the comparison, the fitness function in MI-SSO was therefore modified as follows:
$$\max\ \text{Fitness}(f) = \alpha \cdot \frac{\text{Accuracy}_{SCV=k}(C_f) + 1}{2} + (1 - \alpha) \cdot \frac{\delta(F) - \delta(f)}{\delta(F)}$$
With the remaining experimental parameters unchanged, the experiment was repeated 30 times, and the results were recorded (Table 8). In terms of accuracy, MI-SSO achieved the best classification performance in two datasets (Breast and Ovarian). For the number of features, MI-SSO selected the fewest features in the Breast and Lung datasets. MIM-mMFA obtained the highest accuracy in three datasets (Brain2, Colon, and MLL), while SARA-SVM selected the fewest features in two datasets (Colon and Ovarian). MI-SSO balances maximum classification accuracy with the fewest features, and although it may not always outperform other techniques solely in terms of accuracy or feature count, it demonstrates competitiveness across the six gene expression microarray datasets and outperforms existing techniques in certain scenarios.

4.4. Case Verification

After validating MI-SSO with publicly available data, this study applied MI-SSO to anomaly classification in semiconductor manufacturing. The data come from machine measurements and category labels annotated by engineers. The original dataset includes the following:
  • Lot number—unique batch identifier;
  • Wafer number—sequential number up to 25 wafers per batch;
  • Parameter name—corresponding measurement parameters for each test;
  • Measurement equipment—recorded equipment performing the test;
  • Measurement points 1 to 5—floating-point numbers representing different characteristics at different positions on the same wafer;
  • Label—category indicating wafer quality as good, bad, or risk.
Labels are assigned per batch, as illustrated in Table 9, so the data from each batch (about 25 wafers) are aggregated into a single entry; the maximum and minimum values are taken for each measurement parameter and measurement point. This results in a dataset with 426 entries and 672 features.
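The aggregation step might be sketched with pandas as follows; the file name and column names are hypothetical stand-ins, since the foundry's actual schema is confidential.

```python
import pandas as pd

# Hypothetical sketch of the per-batch aggregation described above.
raw = pd.read_csv("wat_cp_measurements.csv")   # one row per wafer/parameter

agg = raw.pivot_table(
    index="lot_number",                              # one entry per batch
    columns="parameter_name",
    values=[f"point_{i}" for i in range(1, 6)],      # measurement points 1-5
    aggfunc=["max", "min"],                          # max/min across ~25 wafers
)
agg.columns = ["_".join(map(str, c)) for c in agg.columns]  # flatten names
# Result: one row per lot with max/min features, joined with the batch label.
```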
In the case study, MI-SSO used SVM as the classifier and SSO hyperparameters (Cg, Cp, and Cw) of (0.4, 0.7, and 0.9), with 100 candidate solutions (K). A small-sample experiment determined (Nvar, α) = (30, 0.9) for the dataset (Table 10).
After preprocessing (mean imputation, min–max normalization, and SVM hyperparameter tuning), feature selection and anomaly classification were performed using the MI-SSO, MI, MI-GA, and MI-PSO algorithms.
The results (Table 11) show that MI-SSO achieves the highest accuracy and fewest features for semiconductor anomaly classification. Although MI-SSO selects one more feature on average than MI-PSO, it improves the MCC by 0.02. MI-SSO outperforms the GA- and PSO-based methods and is more effective than MI alone.

5. Conclusions

This study addresses the unique problem of wafer anomaly detection with a hybrid feature selection method combining mutual information (MI) and simplified swarm optimization (SSO). MI-SSO operates in two stages: MI filters out less relevant features, and SSO selects the most important subset from the reduced feature space, achieving precise classification with fewer features.
Experimental comparisons with MI, MI-GA, and MI-PSO show that MI-SSO achieves the highest classification performance with fewer features. SSO generally outperforms GA and PSO in convergence speed and consistently produces the best classification models with the smallest feature subsets.
MI-SSO not only enhances classification accuracy but also improves interpretability, helping us to understand the importance of certain features. For semiconductor manufacturing, MI-SSO helps intercept defective products, reduce processing costs, and detect hidden manufacturing problems early. The selected features provide valuable insights for engineers to optimize the measurement and manufacturing processes.

Author Contributions

Conceptualization, W.-C.Y. and C.-L.C.; methodology, W.-C.Y. and C.-L.C.; software, W.-C.Y. and C.-L.C.; validation, W.-C.Y.; formal analysis, W.-C.Y. and C.-L.C.; investigation, W.-C.Y.; resources, W.-C.Y. and C.-L.C.; data curation, W.-C.Y. and C.-L.C.; writing—original draft preparation, W.-C.Y. and C.-L.C.; writing—review and editing, W.-C.Y. and C.-L.C.; visualization, W.-C.Y.; supervision, W.-C.Y.; project administration, W.-C.Y.; funding acquisition, W.-C.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science Council of Taiwan, R.O.C., under grant number MOST 110-2221-E-007-107-MY3.

Data Availability Statement

The data presented in this study are available in this article, as described in Section 4.1.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moore, G.E. Cramming more components onto integrated circuits. Proc. IEEE 1998, 86, 82–85. [Google Scholar] [CrossRef]
  2. Mack, C.A. Fifty years of Moore’s law. IEEE Trans. Semicond. Manuf. 2011, 24, 202–207. [Google Scholar] [CrossRef]
  3. Kikuchi, M. Semiconductor Fabrication Facilities: Equipment, Materials, Processes, and Prescriptions for Industrial Revitalization; Shimao: Taipei, Taiwan, 2016. [Google Scholar]
  4. Kourti, T.; MacGregor, J.F. Process analysis, monitoring and diagnosis, using multivariate projection methods. Chemom. Intell. Lab. Syst. 1995, 28, 3–21. [Google Scholar] [CrossRef]
  5. Baly, R.; Hajj, H. Wafer classification using support vector machines. IEEE Trans. Semicond. Manuf. 2012, 25, 373–383. [Google Scholar] [CrossRef]
  6. He, Q.P.; Wang, J. Fault detection using the k-nearest neighbor rule for semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf. 2007, 20, 345–354. [Google Scholar] [CrossRef]
  7. Piao, M.; Jin, C.H.; Lee, J.Y.; Byun, J.Y. Decision tree ensemble-based wafer map failure pattern recognition based on radon transform-based features. IEEE Trans. Semicond. Manuf. 2018, 31, 250–257. [Google Scholar] [CrossRef]
  8. Shin, C.K.; Park, S.C. A machine learning approach to yield management in semiconductor manufacturing. Int. J. Prod. Res. 2000, 38, 4261–4271. [Google Scholar] [CrossRef]
  9. Cheng, K.C.C.; Chen, L.L.Y.; Li, J.W.; Li, K.S.M.; Tsai, N.C.Y.; Wang, S.J.; Huang, A.Y.-A.; Chou, L.; Lee, C.-S.; Chen, J.E.; et al. Machine learning-based detection method for wafer test induced defects. IEEE Trans. Semicond. Manuf. 2021, 34, 161–167. [Google Scholar] [CrossRef]
  10. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. Feature Selection for High-Dimensional Data; Springer: Berlin/Heidelberg, Germany, 2015. [Google Scholar]
  11. Venkatesh, B.; Anuradha, J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019, 19, 3–26. [Google Scholar] [CrossRef]
  12. Fernández, A.; García, S.; Galar, M.; Prati, R.C.; Krawczyk, B.; Herrera, F. Learning from Imbalanced Data Sets; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  13. Kaur, H.; Pannu, H.S.; Malhi, A.K. A systematic review on imbalanced data challenges in machine learning: Applications and solutions. ACM Comput. Surv. (CSUR) 2019, 52, 1–36. [Google Scholar] [CrossRef]
  14. Jiang, D.; Lin, W.; Raghavan, N. A Gaussian mixture model clustering ensemble regressor for semiconductor manufacturing final test yield prediction. IEEE Access 2021, 9, 22253–22263. [Google Scholar] [CrossRef]
  15. Fan, S.K.S.; Lin, S.C.; Tsai, P.F. Wafer fault detection and key step identification for semiconductor manufacturing using principal component analysis, AdaBoost and decision tree. J. Ind. Prod. Eng. 2016, 33, 151–168. [Google Scholar] [CrossRef]
  16. Chien, C.F.; Wang, W.C.; Cheng, J.C. Data mining for yield enhancement in semiconductor manufacturing and an empirical study. Expert Syst. Appl. 2007, 33, 192–198. [Google Scholar] [CrossRef]
  17. Eesa, A.S.; Orman, Z.; Brifcani, A.M.A. A novel feature-selection approach based on the cuttlefish optimization algorithm for intrusion detection systems. Expert Syst. Appl. 2015, 42, 2670–2679. [Google Scholar] [CrossRef]
  18. Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Trevino, R.P.; Tang, J.; Liu, H. Feature selection: A data perspective. ACM Comput. Surv. (CSUR) 2017, 50, 1–45. [Google Scholar] [CrossRef]
  19. Zebari, R.; Abdulazeez, A.; Zeebaree, D.; Zebari, D.; Saeed, J. A comprehensive review of dimensionality reduction techniques for feature selection and feature extraction. J. Appl. Sci. Technol. Trends 2020, 1, 56–70. [Google Scholar] [CrossRef]
  20. Jović, A.; Brkić, K.; Bogunović, N. A review of feature selection methods with applications. In Proceedings of the 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25–29 May 2015; pp. 1200–1205. [Google Scholar]
  21. Dash, M.; Liu, H. Feature selection for classification. Intell. Data Anal. 1997, 1, 131–156. [Google Scholar] [CrossRef]
  22. Kira, K.; Rendell, L.A. The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, CA, USA, 12–16 July 1992; Volume 2, pp. 129–134. [Google Scholar]
  23. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the myopia of inductive learning algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55. [Google Scholar] [CrossRef]
  24. Kononenko, I. Estimating attributes: Analysis and extensions of RELIEF. In Proceedings of the European Conference on Machine Learning, Catania, Italy, 6–8 April 1994; pp. 171–182. [Google Scholar]
  25. Yang, H.; Moody, J. Data visualization and feature selection: New algorithms for nongaussian data. In Proceedings of the Advances in Neural Information Processing Systems, Denver, CO, USA, 29 November–4 December 1999; Volume 12. [Google Scholar]
  26. Karegowda, A.G.; Manjunath, A.; Jayaram, M. Comparative study of attribute selection using gain ratio and correlation based feature selection. Int. J. Inf. Technol. Knowl. Manag. 2010, 2, 271–277. [Google Scholar]
  27. Azhagusundari, B.; Thanamani, A.S. Feature selection based on information gain. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 2013, 2, 18–21. [Google Scholar]
  28. Alhaj, T.A.; Siraj, M.M.; Zainal, A.; Elshoush, H.T.; Elhaj, F. Feature selection using information gain for improved structural-based alert correlation. PLoS ONE 2016, 11, e0166017. [Google Scholar] [CrossRef] [PubMed]
  29. Jadhav, S.; He, H.; Jenkins, K. Information gain directed genetic algorithm wrapper feature selection for credit rating. Appl. Soft Comput. 2018, 69, 541–553. [Google Scholar] [CrossRef]
  30. Amaldi, E.; Kann, V. On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theor. Comput. Sci. 1998, 209, 237–260. [Google Scholar] [CrossRef]
  31. Soufan, O.; Kleftogiannis, D.; Kalnis, P.; Bajic, V.B. DWFS: A wrapper feature selection tool based on a parallel genetic algorithm. PLoS ONE 2015, 10, e0117988. [Google Scholar] [CrossRef]
  32. Vieira, S.M.; Mendonça, L.F.; Farinha, G.J.; Sousa, J.M. Modified binary PSO for feature selection using SVM applied to mortality prediction of septic patients. Appl. Soft Comput. 2013, 13, 3494–3504. [Google Scholar] [CrossRef]
  33. Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79. [Google Scholar] [CrossRef]
  34. Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene selection for cancer classification using support vector machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
  35. Sarkar, S.D.; Goswami, S.; Agarwal, A.; Aktar, J. A novel feature selection technique for text classification using Naive Bayes. Int. Sch. Res. Not. 2014, 2014, 717092. [Google Scholar] [CrossRef]
  36. Bostani, H.; Sheikhan, M. Hybrid of binary gravitational search algorithm and mutual information for feature selection in intrusion detection systems. Soft Comput. 2017, 21, 2307–2324. [Google Scholar] [CrossRef]
  37. Zhang, J.; Xiong, Y.; Min, S. A new hybrid filter/wrapper algorithm for feature selection in classification. Anal. Chim. Acta 2019, 1080, 43–54. [Google Scholar] [CrossRef] [PubMed]
  38. Naqa, I.E.; Murphy, M.J. What is machine learning? In Machine Learning in Radiation Oncology; Springer: Berlin/Heidelberg, Germany, 2015; pp. 3–11. [Google Scholar]
  39. Zhou, Z.; Wen, C.; Yang, C. Fault detection using random projections and k-nearest neighbor rule for semiconductor manufacturing processes. IEEE Trans. Semicond. Manuf. 2014, 28, 70–79. [Google Scholar] [CrossRef]
  40. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152. [Google Scholar]
  41. Awad, M.; Khanna, R.; Awad, M.; Khanna, R. Support vector machines for classification. In Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers; Apress: Berkeley, CA, USA, 2015; pp. 39–66. [Google Scholar]
  42. Nutt, C.L.; Mani, D.R.; Betensky, R.A.; Tamayo, P.; Cairncross, J.G.; Ladd, C.; Pohl, U.; Hartmann, C.; McLaughlin, M.E.; Batchelor, T.T.; et al. Gene expression-based classification of malignant gliomas correlates better with survival than histological classification. Cancer Res. 2003, 63, 1602–1607. [Google Scholar]
  43. Alon, U.; Barkai, N.; Notterman, D.A.; Gish, K.; Ybarra, S.; Mack, D.; Levine, A.J. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA 1999, 96, 6745–6750. [Google Scholar] [CrossRef]
  44. Cervantes, J.; Garcia-Lamont, F.; Rodríguez-Mazahua, L.; Lopez, A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020, 408, 189–215. [Google Scholar] [CrossRef]
  45. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  46. Breiman, L.; Friedman, J.; Olshen, R.; Stone, C.J. Classification and Regression Trees; Chapman & Hall/CRC: New York, NY, USA, 1984. [Google Scholar]
  47. Beheshti, Z.; Shamsuddin, S.M.H. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl. 2013, 5, 1–35. [Google Scholar]
  48. Yeh, W.C. A two-stage discrete particle swarm optimization for the problem of multiple multi-level redundancy allocation in series systems. Expert Syst. Appl. 2009, 36, 9192–9200. [Google Scholar] [CrossRef]
  49. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  50. Yeh, W.C.; Chuang, M.C.; Lee, W.C. Uniform parallel machine scheduling with resource consumption constraint. Appl. Math. Model. 2015, 39, 2131–2138. [Google Scholar] [CrossRef]
  51. Yeh, W.C.; Wei, S.C. Economic-based resource allocation for reliable Grid-computing service based on Grid Bank. Future Gener. Comput. Syst. 2012, 28, 989–1002. [Google Scholar] [CrossRef]
  52. Lee, W.C.; Chuang, M.C.; Yeh, W.C. Uniform parallel-machine scheduling to minimize makespan with position-based learning curves. Comput. Ind. Eng. 2012, 63, 813–818. [Google Scholar] [CrossRef]
  53. Corley, H.W.; Rosenberger, J.; Yeh, W.C.; Sung, T.K. The cosine simplex algorithm. Int. J. Adv. Manuf. Technol. 2006, 27, 1047–1050. [Google Scholar] [CrossRef]
  54. Yeh, W.C. A new algorithm for generating minimal cut sets in k-out-of-n networks. Reliab. Eng. Syst. Saf. 2006, 91, 36–43. [Google Scholar] [CrossRef]
  55. Luo, C.; Sun, B.; Yang, K.; Lu, T.; Yeh, W.C. Thermal infrared and visible sequences fusion tracking based on a hybrid tracking framework with adaptive weighting scheme. Infrared Phys. Technol. 2019, 99, 265–276. [Google Scholar] [CrossRef]
  56. Bae, C.; Yeh, W.C.; Wahid, N.; Chung, Y.Y.; Liu, Y. A New Simplified Swarm Optimization (SSO) Using Exchange Local Search Scheme. Int. J. Innov. Comput. Inf. Control 2012, 8, 4391–4406. [Google Scholar]
  57. Yeh, W.C. A new exact solution algorithm for a novel generalized redundancy allocation problem. Inf. Sci. 2017, 408, 182–197. [Google Scholar] [CrossRef]
  58. Hsieh, T.J.; Yeh, W.C. Knowledge discovery employing grid scheme least squares support vector machines based on orthogonal design bee colony algorithm. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2011, 41, 1198–1212. [Google Scholar] [CrossRef] [PubMed]
  59. Chung, Y.Y.; Wahid, N. A hybrid network intrusion detection system using simplified swarm optimization (SSO). Appl. Soft Comput. 2012, 12, 3014–3022. [Google Scholar] [CrossRef]
  60. Lai, C.M.; Yeh, W.C.; Chang, C.Y. Gene Selection using Information Gain and Improved Simplified Swarm Optimization. Neurocomputing 2016, 218, 331–338. [Google Scholar] [CrossRef]
  61. Song, X.F.; Zhang, Y.; Guo, Y.N.; Sun, X.Y.; Wang, L. Variable-size cooperative coevolutionary particle swarm optimization for feature selection on high-dimensional data. IEEE Trans. Evol. Comput. 2020, 24, 882–895. [Google Scholar] [CrossRef]
  62. Kohavi, R. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI), Montreal, QC, Canada, 20–25 August 1995; Volume 14, pp. 1137–1145. [Google Scholar]
  63. Gorodkin, J. Comparing two K-category assignments by a K-category correlation coefficient. Comput. Biol. Chem. 2004, 28, 367–374. [Google Scholar] [CrossRef] [PubMed]
  64. Chicco, D.; Jurman, G. A statistical comparison between Matthews correlation coefficient (MCC), prevalence threshold, and Fowlkes–Mallows index. J. Biomed. Inform. 2023, 144, 104426. [Google Scholar] [CrossRef]
  65. Zhu, Z.; Ong, Y.S.; Dash, M. Markov blanket-embedded genetic algorithm for gene selection. Pattern Recognit. 2007, 40, 3236–3248. [Google Scholar] [CrossRef]
  66. Petricoin, E.F.; Ardekani, A.M.; Hitt, B.A.; Levine, P.J.; Fusaro, V.A.; Steinberg, S.M.; Mills, G.B.; Simone, C.; Fishman, D.A.; Kohn, E.C.; et al. Use of proteomic patterns in serum to identify ovarian cancer. Lancet 2002, 359, 572–577. [Google Scholar] [CrossRef] [PubMed]
  67. Dabba, A.; Tari, A.; Meftali, S.; Mokhtari, R. Gene selection and classification of microarray data method based on mutual information and moth flame algorithm. Expert Syst. Appl. 2021, 166, 114012. [Google Scholar] [CrossRef]
  68. Heris, M.K. Practical Genetic Algorithms in Python and MATLAB—Video Tutorial. Yarpiz. 2020. Available online: https://yarpiz.com/632/ypga191215-practical-genetic-algorithms-in-python-and-matlab (accessed on 10 January 2024).
  69. Chen, K.; Xue, B.; Zhang, M.; Zhou, F. Evolutionary multitasking for feature selection in high-dimensional classification via particle swarm optimization. IEEE Trans. Evol. Comput. 2021, 26, 446–460. [Google Scholar] [CrossRef]
  70. Chaudhuri, A.; Sahu, T.P. A hybrid feature selection method based on Binary Jaya algorithm for micro-array data classification. Comput. Electr. Eng. 2021, 90, 106963. [Google Scholar] [CrossRef]
  71. Baliarsingh, S.K.; Muhammad, K.; Bakshi, S. SARA: A memetic algorithm for high-dimensional biomedical data. Appl. Soft Comput. 2021, 101, 107009. [Google Scholar] [CrossRef]
Figure 1. Decision tree structure.
Figure 2. Particle encoding method for a simple case.
Figure 3. Flowchart of SSO updating steps.
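As a concrete illustration of the update flow in Figure 3, the following is a minimal Python sketch of one common SSO update scheme, in which Cg, Cp, and Cw act as cumulative probability thresholds on a single uniform draw per variable. The interval ordering and the random-reset range `domain` are assumptions for illustration, not the authors' exact implementation.

```python
import random

def sso_update(x, pbest, gbest, cg=0.4, cp=0.7, cw=0.9, domain=(0.0, 1.0)):
    """Return an updated copy of solution x under a common SSO scheme.

    For each variable j, a uniform random number rho picks the source:
      rho <  cg       -> the global best value gbest[j]
      cg <= rho < cp  -> the personal best value pbest[j]
      cp <= rho < cw  -> the current value x[j] (unchanged)
      rho >= cw       -> a fresh random value (exploration)
    """
    new_x = []
    for j in range(len(x)):
        rho = random.random()
        if rho < cg:
            new_x.append(gbest[j])
        elif rho < cp:
            new_x.append(pbest[j])
        elif rho < cw:
            new_x.append(x[j])
        else:
            new_x.append(random.uniform(*domain))
    return new_x

# Example: update a 5-variable solution toward its bests.
x = [0.2, 0.8, 0.5, 0.1, 0.9]
print(sso_update(x, pbest=x, gbest=[1.0] * 5))
```

With the default thresholds above, each variable is copied from the global best with probability 0.4, from the personal best with probability 0.3, kept with probability 0.2, and randomized with probability 0.1, which matches the Cg < Cp < Cw combinations explored in Table 3.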
Table 1. Basic description of the datasets.

| Dataset Name | Number of Features | Number of Instances | Number of Classes (Proportions) | Source |
| Brain2 | 10,367 | 50 | 4 (14:7:14:15) | [42] |
| Breast | 24,481 | 97 | 2 (51:46) | [65] |
| Colon | 2000 | 62 | 2 (40:22) | [43] |
| Lung | 3312 | 203 | 5 (139:17:21:20:6) | [18] |
| MLL | 12,582 | 72 | 3 (24:20:28) | [65] |
| Ovarian | 15,154 | 253 | 2 (162:91) | [66] |
Table 2. Comparison of different classifiers (average).

| Metric (avg.) | KNN | SVM | RF |
| MCC | 0.742544 | 0.758584 | 0.757617 |
| #F | 21.552381 | 20.628571 | 21.666667 |
| Time (min) | 6.666159 | 5.785098 | 11.083760 |
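For reference, a comparison of this kind can be reproduced in spirit with scikit-learn; the sketch below scores KNN, SVM, and random forest by the Matthews correlation coefficient (MCC) on a synthetic stand-in dataset. The data generator and the default classifier hyperparameters are placeholders, not the study's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a dataset after feature selection.
X, y = make_classification(n_samples=300, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Score each candidate classifier by MCC on the held-out split.
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    mcc = matthews_corrcoef(y_te, clf.predict(X_te))
    print(f"{name}: MCC = {mcc:.4f}")
```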
Table 3. Comparison of classification performance with different combinations of SSO hyperparameters (average). Each column header is one (Cg, Cp, Cw) combination.

| Dataset | Metric | (0.4, 0.5, 0.7) | (0.4, 0.6, 0.8) | (0.4, 0.7, 0.9) | (0.5, 0.5, 0.8) | (0.5, 0.6, 0.9) | (0.5, 0.7, 0.7) | (0.6, 0.6, 0.7) | (0.6, 0.7, 0.8) |
| Brain2 | MCC | 0.8796 | 0.9025 | 0.8937 | 0.8795 | 0.8954 | 0.8898 | 0.8913 | 0.8887 |
| | #F | 29.0 | 26.8 | 23.1 | 27.4 | 23.2 | 26.9 | 27.5 | 24.5 |
| Breast | MCC | 0.7024 | 0.7903 | 0.8278 | 0.8129 | 0.8337 | 0.8330 | 0.7787 | 0.7886 |
| | #F | 30.2 | 29.2 | 25.6 | 29.8 | 25.4 | 29.7 | 29.1 | 28.2 |
| Colon | MCC | 0.8682 | 0.8661 | 0.8825 | 0.8597 | 0.8623 | 0.8674 | 0.8715 | 0.8763 |
| | #F | 27.6 | 27.5 | 28.1 | 27.5 | 27.6 | 27.3 | 27.4 | 28.0 |
| Lung | MCC | 0.8471 | 0.8857 | 0.9120 | 0.8846 | 0.9113 | 0.8441 | 0.8538 | 0.8921 |
| | #F | 32.2 | 32.1 | 33.1 | 32.7 | 31.1 | 30.4 | 32.8 | 32.8 |
| MLL | MCC | 0.9920 | 0.9952 | 0.9933 | 0.9907 | 0.9939 | 0.9932 | 0.9924 | 0.9920 |
| | #F | 23.5 | 20.7 | 15.7 | 20.4 | 15.8 | 21.9 | 22.6 | 19.9 |
| Ovarian | MCC | 0.9979 | 0.9982 | 0.9987 | 0.9977 | 0.9977 | 0.9981 | 0.9969 | 0.9983 |
| | #F | 21.5 | 19.6 | 14.4 | 19.6 | 14.6 | 20.1 | 19.4 | 17.3 |
| Avg. | MCC | 0.8812 | 0.9063 | 0.9180 | 0.9042 | 0.9157 | 0.9043 | 0.8974 | 0.9060 |
| | #F | 27.3333 | 25.9833 | 23.3333 | 26.2333 | 22.9500 | 26.0500 | 26.4667 | 25.1167 |
| | Fitness | 0.9433 | 0.9553 | 0.9608 | 0.9542 | 0.9597 | 0.9543 | 0.9510 | 0.9551 |
Table 4. Impacts of alpha and the solution length on the classification results (average). Each column header is one (α, Nvar) combination.

| Dataset | Metric | (0.6, 30) | (0.6, 50) | (0.7, 30) | (0.7, 50) | (0.8, 30) | (0.8, 50) | (0.9, 30) | (0.9, 50) | (1, 30) | (1, 50) |
| Brain2 | MCC | 0.9005 | 0.9061 | 0.9087 | 0.9021 | 0.9057 | 0.8976 | 0.9018 | 0.9080 | 0.9115 | 0.9120 |
| | #F | 15.2 | 23.2 | 16.7 | 22.4 | 16.5 | 22.2 | 16.3 | 22.8 | 22.7 | 32.5 |
| | Fitness | 0.9527 | 0.9553 | 0.9566 | 0.9534 | 0.9551 | 0.9513 | 0.9533 | 0.9561 | 0.9579 | 0.9581 |
| Breast | MCC | 0.8644 | 0.846246 | 0.815585 | 0.840691 | 0.8381 | 0.8524 | 0.8036 | 0.8521 | 0.7710 | 0.8522 |
| | #F | 17 | 25.4 | 16.4 | 27.3 | 16.5 | 26.6 | 17.1 | 26.1 | 22.9 | 34.7 |
| | Fitness | 0.9355 | 0.9269 | 0.9124 | 0.9243 | 0.9231 | 0.9298 | 0.9067 | 0.9297 | 0.8912 | 0.9297 |
| Colon | MCC | 0.8793 | 0.8718 | 0.8756 | 0.8714 | 0.8715 | 0.8750 | 0.8754 | 0.8746 | 0.8782 | 0.8710 |
| | #F | 20.7 | 28.3 | 19.3 | 27.3 | 18.9 | 27.3 | 20.3 | 28 | 23 | 29.8 |
| | Fitness | 0.9422 | 0.9384 | 0.9404 | 0.9382 | 0.9385 | 0.9399 | 0.9403 | 0.9397 | 0.9416 | 0.9380 |
| Lung | MCC | 0.9009 | 0.9193 | 0.9064 | 0.9157 | 0.9044 | 0.9061 | 0.9035 | 0.9146 | 0.9083 | 0.9135 |
| | #F | 23.1 | 31.8 | 23 | 33.6 | 22.2 | 33 | 24.1 | 33.3 | 26 | 37.3 |
| | Fitness | 0.9526 | 0.9612 | 0.9552 | 0.9594 | 0.9543 | 0.9549 | 0.9538 | 0.9589 | 0.9561 | 0.9583 |
| MLL | MCC | 0.9929 | 0.9941 | 0.9927 | 0.9943 | 0.9916 | 0.9936 | 0.9949 | 0.9930 | 0.9958 | 0.9951 |
| | #F | 10.7 | 16.6 | 10.7 | 15.8 | 10.3 | 16.9 | 9.8 | 16.6 | 21 | 31.9 |
| | Fitness | 0.9966 | 0.9971 | 0.9965 | 0.9972 | 0.9960 | 0.9969 | 0.9975 | 0.9966 | 0.9979 | 0.9976 |
| Ovarian | MCC | 0.9979 | 0.9974 | 0.9975 | 0.9984 | 0.9988 | 0.9980 | 0.9987 | 0.9984 | 0.9987 | 0.9987 |
| | #F | 8.9 | 15 | 8.5 | 14.6 | 8.2 | 14.8 | 8.3 | 15.1 | 21.8 | 31.2 |
| | Fitness | 0.9990 | 0.9987 | 0.9988 | 0.9992 | 0.9994 | 0.9990 | 0.9994 | 0.9992 | 0.9993 | 0.9993 |
Table 5. The experimental settings. (The Cg and Cp values are stated here consistently with the best-performing combination in Table 3.)

| Hyperparameter | Value | Description |
| K | 100 | Number of candidate solutions |
| Ngen | 100 | Number of iterations |
| Nsol | 50 | Number of solutions |
| Nvar | 30 or 50 | Solution length, depending on the dataset |
| Cg | 0.4 | SSO hyperparameter |
| Cp | 0.7 | SSO hyperparameter |
| Cw | 0.9 | SSO hyperparameter |
| α | 0.6, 0.7, 0.8, 0.9, or 1 | Fitness function hyperparameter, depending on the dataset |
| k | 4 | Number of folds in stratified cross-validation (SCV) |
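To make these settings concrete, the sketch below shows how the two stages they parameterize could be wired up: a mutual-information filter that retains a pool of candidate features, and a subset-scoring routine based on k = 4 stratified cross-validation with an SVM, of the kind used inside the SSO fitness evaluation. The helper names and the omission of the α-weighted trade-off between MCC and subset size are assumptions; only the general pipeline follows the paper.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def mi_candidates(X, y, k=100):
    """Stage 1: indices of the k features with the highest mutual information."""
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(mi)[::-1][:k]

def subset_mcc(X, y, features, n_splits=4):
    """Stage 2 building block: average MCC of an SVM over stratified
    k-fold cross-validation, for scoring one candidate feature subset."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = []
    for tr, te in skf.split(X, y):
        clf = SVC().fit(X[tr][:, features], y[tr])
        scores.append(matthews_corrcoef(y[te], clf.predict(X[te][:, features])))
    return float(np.mean(scores))
```

In use, `mi_candidates` would be called once to shrink the feature space, after which each SSO solution encodes a subset of those candidates and is scored via `subset_mcc` (combined with a subset-size penalty weighted by α in the actual fitness function).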
Table 6. Comparison of public datasets.

| Dataset | Metric (avg.) | MI | MI-GA | MI-PSO | MI-SSO |
| Brain2 | MCC | 0.900086 | 0.853529 | 0.857373 | 0.903063 |
| | #F | 94 | 39.9 | 28.5 | 31.133333 |
| Breast | MCC | 0.833606 | 0.720417 | 0.535284 | 0.848164 |
| | #F | 90 | 25.933333 | 14.466667 | 16.333333 |
| Colon | MCC | 0.67774 | 0.76019 | 0.86627 | 0.930007 |
| | #F | 18 | 26.1 | 20.366667 | 19.633333 |
| Lung | MCC | 0.953074 | 0.874127 | 0.452402 | 0.918722 |
| | #F | 117 | 39.733333 | 17.766667 | 33.5 |
| MLL | MCC | 0.984438 | 0.972186 | 0.993398 | 0.994614 |
| | #F | 4 | 26.433333 | 16.466667 | 13.666667 |
| Ovarian | MCC | 0.999289 | 0.992263 | 0.998102 | 0.99863 |
| | #F | 53 | 25.933333 | 12.8 | 8.633333 |
Table 7. Algorithm running time (unit: minutes).

| Dataset | MI-GA | MI-PSO | MI-SSO |
| SEMI | 6.580849 | 6.904897 | 6.644581 |
| Brain2 | 3.241753 | 3.209523 | 3.228779 |
| Breast | 4.874745 | 4.494404 | 4.755814 |
| Colon | 5.430743 | 5.18067 | 5.423277 |
| Lung | 3.115069 | 3.180372 | 3.13715 |
| MLL | 4.260329 | 4.080748 | 4.283061 |
| Ovarian | 4.92204 | 4.471951 | 4.887172 |
| Average | 4.632218 | 4.503224 | 4.622833 |
Table 8. Comparison of MI-SSO with other algorithms ("—" indicates a value not reported).

| Dataset | Metric (avg.) | MIM-mMFA | VS-CCPSO | MTPSO | TOPSIS-Jaya | SARA-SVM | MI-SSO |
| Brain2 | ACC | 1 | 0.8047 | 0.8540 | — | — | 0.921967 |
| | #F | 11.9 | 381.46 | 1066.32 | — | — | 31.3 |
| Breast | ACC | 0.868 | — | — | — | — | 0.924472 |
| | #F | 25.9 | — | — | — | — | 16.7 |
| Colon | ACC | 1 | — | — | 0.9776 | 0.9702 | 0.952861 |
| | #F | 26.3 | — | — | 18.99 | — | 19.766667 |
| Lung | ACC | — | 0.9791 | 0.9740 | — | — | 0.956916 |
| | #F | — | 370.79 | 343.24 | — | — | 32.6 |
| MLL | ACC | 1 | — | — | 0.9962 | — | 0.996235 |
| | #F | 33 | — | — | 12.9 | — | 13.366667 |
| Ovarian | ACC | 0.9818 | — | — | 0.9952 | 0.9915 | 0.999297 |
| | #F | 35.9 | — | — | 18.56 | — | 8.7 |
Table 9. Illustration of the case dataset.

| Batch Number | p1m1 Max | p1m1 Min | p1m2 Max | … pimj Max/Min … | p135m5 Max | p135m5 Min | Mark |
| A | 4.00636 | 3.88418 | 0.669097 | … | 4.99095 | 4.98854 | good |
| B | 3.96926 | 3.87155 | 0.611947 | … | 4.9946 | 4.99371 | good |
| C | 4.12133 | 3.88967 | 0.611947 | … | 4.98923 | 4.98752 | bad |
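Table 9 suggests that each production batch is summarized by per-parameter, per-machine extrema (a Max and a Min column for each parameter pi on machine mj). A hypothetical pandas sketch of that aggregation follows; the long-format column names (batch, sensor, value) are invented for illustration and are not the case dataset's schema.

```python
import pandas as pd

# Hypothetical long-format process log: one row per sensor reading.
log = pd.DataFrame({
    "batch":  ["A", "A", "A", "B", "B", "B"],
    "sensor": ["p1m1", "p1m1", "p1m2", "p1m1", "p1m1", "p1m2"],
    "value":  [4.00636, 3.88418, 0.669097, 3.96926, 3.87155, 0.611947],
})

# One Max and one Min feature per sensor, mirroring the layout of Table 9.
features = log.pivot_table(index="batch", columns="sensor",
                           values="value", aggfunc=["max", "min"])
features.columns = [f"{sensor} {agg.capitalize()}"
                    for agg, sensor in features.columns]
print(features)
```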
Table 10. The influences of different alpha values and solution lengths on the classification results for the semiconductor manufacturing dataset. Each column header is one (α, Nvar) combination.

| Dataset | Metric | (0.6, 30) | (0.6, 50) | (0.7, 30) | (0.7, 50) | (0.8, 30) | (0.8, 50) | (0.9, 30) | (0.9, 50) | (1, 30) | (1, 50) |
| SEMI | MCC | 0.9558 | 0.9604 | 0.9616 | 0.9610 | 0.9613 | 0.9639 | 0.9634 | 0.9607 | 0.9631 | 0.9562 |
| | #F | 21.3 | 29.5 | 23.3 | 31.9 | 23.5 | 32.6 | 22.5 | 35.2 | 25.9 | 38.7 |
| | Fitness | 0.9760 | 0.9754 | 0.9777 | 0.9749 | 0.9775 | 0.9759 | 0.9787 | 0.9738 | 0.9775 | 0.9710 |
Table 11. Comparison of the effectiveness of semiconductor manufacturing datasets.

| Dataset | Metric (avg.) | MI | MI-GA | MI-PSO | MI-SSO |
| SEMI | ACC | 0.993177 | 0.981726 | 0.990751 | 0.994009 |
| | MCC | 0.957388 | 0.882505 | 0.941981 | 0.962464 |
| | #F | 35 | 25.9 | 22.64 | 23.5 |
| | Fitness | 0.972539 | 0.945294 | 0.970054 | 0.977992 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
