Article

Discrete Missing Data Imputation Using Multilayer Perceptron and Momentum Gradient Descent †

1 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
2 Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China
3 Key Laboratory of Intelligent Computing and Information Processing, Fujian Province, Quanzhou 362000, China
4 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
This paper is an extension version of the conference paper: Yan, C.; Yuan, J.; Ye, Z.; Yang, Z. A Discrete Missing Data Imputation Method Based on Improved Multi-layer Perceptron. In Proceedings of the 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) 2021, Krakow, Poland, 22–25 September 2021.
Sensors 2022, 22(15), 5645; https://doi.org/10.3390/s22155645
Submission received: 20 June 2022 / Revised: 21 July 2022 / Accepted: 26 July 2022 / Published: 28 July 2022

Abstract

Data are a strategic resource for industrial production, and an efficient data-mining process will increase productivity. However, there exist many missing values in data collected in real life due to various problems. Because the missing data may reduce productivity, missing value imputation is an important research topic in data mining. At present, most studies mainly focus on imputation methods for continuous missing data, while a few concentrate on discrete missing data. In this paper, a discrete missing value imputation method based on a multilayer perceptron (MLP) is proposed, which employs a momentum gradient descent algorithm, and some prefilling strategies are utilized to improve the convergence speed of the MLP. To verify the effectiveness of the method, experiments are conducted to compare the classification accuracy with eight common imputation methods, such as the mode, random, hot-deck, KNN, autoencoder, and MLP, under different missing mechanisms and missing proportions. Experimental results verify that the improved MLP model (IMLP) can effectively impute discrete missing values in most situations under three missing patterns.

1. Introduction

With the advent of the data age, scientific and engineering practices generate explosively growing, widely available, and large volumes of data. These data contain latent patterns about the development of various industries, which are attracting increasing attention from academia and industry [1]. Over the past few decades, many enterprises have taken advantage of data to assist in making production decisions, achieving great benefits [2]. With the advancement of global informatization, analyzing and utilizing data play a significant role in promoting social development. However, there are some problems in the data-collection process, such as functional limitations, equipment failures, incorrect data, temporary modification, and lack of responses to surveys [3], which generate considerable numbers of missing values in the final datasets. These incomplete, unprocessed datasets may distort data analysis results, reduce data utilization, and even lead to wrong decisions [4]. Therefore, handling missing values in datasets is critical to improving the benefits of mining and using data, and it is of great research significance.
Because of the importance of dealing with missing values in the object dataset, scholars have investigated many solutions. Generally, deleting the tuples with missing values is one of the most straightforward and most common methods [5]. However, its performance is extremely unsatisfactory when the percentage of missing values per attribute varies considerably. Moreover, removing the tuples may make the remaining attribute values in the entire dataset less useful, as the deleted tuples may be crucial for the task at hand. Studies have shown that good performance can be maintained after removing missing data only if the missing proportion is less than 10% or 15% [6]. To avoid these problems with deletion, another solution is to impute the missing values using statistical or machine learning techniques. Statistics studies the collection, analysis, interpretation, and presentation of data, and statistical imputation techniques model the missing values to reconstruct the dataset. Initially, statistical techniques were utilized to find the latent laws within the complete data to impute missing values, including the mean (mode), random, and hot-deck methods [7]. Taking the mode as an example, a measure of central tendency for the attribute is applied to fill in the missing values. Because statistical techniques are easy to implement, they have been successfully used in various fields and remain one of the major solutions for missing value imputation [8]. However, statistical imputation methods may degrade in performance when the observed values are not close to the actual estimate of the missing value [9]. Machine learning techniques enable computer programs to automatically learn to recognize complex patterns and make intelligent decisions based on data. In this regard, imputation methods based on machine learning techniques develop models using the observed attributes to effectively fill in the missing values of the unobserved attributes [10]. Compared with statistics-based imputation methods, machine learning techniques make missing value imputation more precise by selecting the patterns or features most similar to the actual estimates of the missing values. Consequently, machine learning techniques play the same role as statistical techniques in reconstructing a complete dataset [11].
Machine learning techniques have formed the basis of a series of efficient frameworks for data processing over the past few decades, such as the k-nearest neighbors (KNN) [12,13], artificial neural network (ANN) [14], support vector machine (SVM) [15], autoencoder (AE) [16], and multilayer perceptron (MLP) methods [17], some of which have also been applied to missing value imputation. For instance, Sanjar et al. developed a prediction model for house prices that exploits correlations between features using the KNN algorithm. It outperformed the baseline mentioned in [13], but calculating KNN similarities incurred computational overhead, and the effect of the missing ratio on imputation performance was ignored. To explore the effect of the missing rate on imputation performance, Lin et al. applied deep belief networks with a feature extraction strategy to missing value imputation, conducting experiments with missing proportions ranging from 1% to 15% [18]. Wang et al. developed a transfer learning model with an additive least squares SVM to improve classification performance on incomplete datasets, conducting experiments with missing proportions from 10% to 60% [15]. It has been shown that such approaches perform stably under various missing proportions, even when the missing ratio exceeds 50%. Moreover, an AE can learn to represent the incomplete data and infer alternative values for the missing entries [16]. Pereira et al. summarized and discussed the successful uses of autoencoders for missing data imputation during the last decade [19], showing that denoising autoencoders are a suitable choice for missing value problems. Yoon et al. introduced the GAN framework to deal with missing values, and their proposed method significantly exceeded some state-of-the-art imputation methods [20]. Nonetheless, all the above research principally focused on continuous missing data, and the final imputation performance is affected by the data size. The MLP is another representative deep learning framework for missing data imputation [21], and it has shown its superiority on large-scale or unstructured data. For instance, Gad et al. and Cheng et al. took advantage of the MLP as a framework for handling climate and medical data [22,23], respectively. Because of its simplicity and efficiency, the MLP has been investigated further in recent years to improve missing value imputation. Jung et al. introduced a novel missing value imputation scheme utilizing a bagging ensemble of multilayer perceptrons, which provided superior performance on electricity consumption data [24]. Śmieja et al. proposed a variant of the original MLP for several incomplete datasets with missing ratios from 0.25% to 23.8%, which verified that the MLP imputation method is widely applicable across distinct domains [25]. In [26], experiments showed that the MLP method was optimal for numerical and mixed datasets in terms of classification accuracy, while the classification and regression tree (CART) method performed well for categorical datasets.
Though imputing missing data with an MLP performs well in specific applications, several challenges remain. First, considerable training time and computational cost are required to maintain the imputation performance, which might reduce its practicality to some extent. Second, real-world datasets are in most cases composed of both continuous and discrete data, so there are both continuous and discrete missing values. However, many studies only paid attention to continuous data, while little research concerned discrete missing data [27,28]. Although some of the mentioned methods could theoretically handle discrete missing values with certain additional manipulations, they were often slow or inefficient on the target problems. Finally, there are three missing mechanisms according to how missing values occur, but most previous research has studied only one of them (more details are introduced in the next section). Therefore, it is essential to improve the standard MLP method to overcome the existing problems and propose a novel imputation scheme based on an improved MLP technique for discrete missing data. The main contributions of this paper are as follows:
  • An imputation scheme for discrete missing data based on a multilayer perceptron with the gradient descent algorithm and prefilling strategy is proposed.
  • A performance evaluation is conducted on seven real-world datasets for three missing mechanisms, compared with eight classical methods, whereas previous work mainly considered a single missing mechanism. Several levels of noise are artificially added to simulate missing proportions according to the definition of the missing mechanisms, and the effect of the missing proportion is explored.
This paper focuses on developing an efficient imputation method for filling in incomplete datasets, especially datasets with discrete missing values. To this end, an imputation method based on an improved multilayer perceptron with the momentum gradient technique [29] is proposed to overcome the insufficiency of the basic MLP method [30]; it combines different prefilling strategies for specific missing patterns. To verify the effectiveness of this method, comparisons are conducted against various statistical and machine learning imputation methods under different missing patterns and missing proportions.
The rest of the paper is structured as follows. Section 2 introduces the fundamental conceptions of missing data, representative imputation methods, and the MLP framework. The improved MLP imputation scheme for discrete missing data is concretely illustrated in Section 3. Section 4 describes the experimental settings and presents the results of the experiments, analyzing the latent laws within the results. Lastly, Section 5 summarizes the conclusions and provides avenues for future work regarding discrete missing data imputation theory.

2. Materials and Methodology

Data preprocessing plays a significant role in data mining and analysis, typically including normalizing data, removing noise, dealing with missing values, etc. Because missing data generally degrade the efficiency of data mining and analysis, much recent research concentrates on handling them to improve data quality using statistical techniques, machine learning techniques, ensemble methods, etc. For instance, Emmanuel et al. discussed and summarized the classical techniques for missing data imputation [31], proposing and evaluating two methods at missing rates of 5% to 20%. The results certified that KNN can successfully deal with missing data. However, their major experimental object was continuous data, and they provided insufficient comparisons across missing rates and competing imputation methods. Unsupervised machine learning techniques are another typical way to handle missing data. Raja et al. conducted experiments on the Dermatology, Pima, Wisconsin, and Yeast datasets, utilizing an improved fuzzy C-means algorithm to enhance the utilization of information [32]. Li et al. combined fuzzy C-means and a vaguely quantified rough set to detect reasonable clusterings for missing data [3]. Nevertheless, the computational cost of such methods remains a challenge for practical application. Machine learning techniques with neural networks have increasingly shown their superior ability to handle sizeable data. To reduce the influence of missing values on prediction performance, Lim et al. developed an LSTM-based time series model for predicting future values of liquid cargo traffic with other evaluation indexes, finding that the proposed model improved the prediction performance [33]. Zhang et al. adopted an end-to-end GAN framework to handle multivariate time series data, combined with an encoder network to improve the prediction performance [34]. Li et al. explored random missing and continuous missing situations for dam structural health monitoring (SHM) systems, proposing a combination of deep learning and transfer learning to improve the generalization of the missing data imputation scheme [35]. Many successful applications of deep learning have shown that it is well suited to missing data. However, deep learning techniques need considerable training data and computational resources, which may be an obstacle to their application in practice. Furthermore, their neglect of discrete missing data also limits their practicality.
The MLP is well suited to data processing, and it has been successfully used in missing data imputation [36,37]. Recently, the influences of missing mechanisms, missing rates, and specific application domains have been further studied. Missing mechanisms can affect the imputation performance of different methods. To explore imputation performance under MAR, Fallah successfully utilized the MLP method to impute time series landfill gas data [38]. Śmieja et al. performed experiments with missing rates from 0.25% to 23.8% based on the MLP method for continuous data [25]. Luo et al. evaluated the MLP against other competitive machine learning and statistical models on clinical data [39]. Lin et al. investigated the effect of data discretization for continuous data, where the MLP and DBN were significantly superior to the baseline imputation methods [40]. To repair missing data for credit risk prediction, Yang et al. developed an ensemble MLP model with superior accuracy to traditional machine learning models, which testified that repairing missing data can improve a model's prediction ability [41]. However, wider application of the technique to missing data imputation first requires a more comprehensive consideration of missing mechanisms and missing rates. Discrete data are of great significance to study, as handling them insufficiently may reduce data utilization and harm decision making. All of these concerns motivate us to propose a new scheme for discrete missing data to improve imputation performance, which is evaluated under three missing mechanisms and five levels of missing proportions.
This section presents some fundamental concepts of missing data, especially regarding discrete missing data and missing patterns. The basic methodology of the missing data imputation technique is provided next. Finally, MLP and the gradient descent algorithm are accordingly introduced.

2.1. Discrete Data

There are two types of data, i.e., continuous and discrete data, where discrete attributes refer to attributes with a finite or countably infinite number of values, represented with or without integers. Generally, discrete attributes include ordinal attributes, binary attributes, nominal attributes, etc. Most research has concentrated on imputation methods for continuous missing data, such as regressions [42,43], decision trees [44], and deep learning techniques [42], but few studies offer solutions for discrete missing data. For instance, a dataset named Lymphography from the UCI Machine Learning Repository [45] used in this study contains an attribute named lymphatics with four attribute values: normal, arched, deformed, and displaced. However, most machine learning techniques cannot directly deal with such discrete data. If discrete data enter an imputation model without processing, most models will not perform well. Thus, it is essential to preprocess datasets containing discrete data. One-hot encoding is a common discretization technique based on binary coding which can effectively deal with different types of data, expanding the feature space to some extent. Figure 1 shows the one-hot encoding of the above example, where a 1 marks the position of the discrete value in the coding space. In particular, the one-hot technique is also suitable for discrete integer values.
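As a minimal illustration, the lymphatics attribute from the example above can be one-hot encoded with pandas (one of the libraries listed in Section 4.1.1); the column name simply mirrors the example, and this is a sketch rather than the paper's exact preprocessing code.

```python
import pandas as pd

# The four discrete values of the lymphatics attribute from the Lymphography example.
df = pd.DataFrame({"lymphatics": ["normal", "arched", "deformed", "displaced"]})

# get_dummies expands the single discrete column into four binary columns;
# each row now has a single 1 marking its value's position in the coding space.
encoded = pd.get_dummies(df, columns=["lymphatics"], dtype=int)
print(encoded)
```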

2.2. Basic Methodology for Missing Data Imputation

The missing mechanism or pattern is an inherent feature of the missing data. In [10], the missing pattern was theoretically classified into three categories, including missing completely at random (MCAR), missing at random (MAR), and not missing at random (NMAR). Specifically, MCAR means that the missing values occur randomly and do not generate deflection in the results, which have no relationship with the observed and unobserved data. MAR refers to a pattern in which missing values are related to the observed data, which suggests the missing data can be inferred from the existing data. The case in which missing data are related to the unobserved data can be classified as NMAR. This theory means that the missing values can be effectively imputed with some specific laws.
According to the definition of missing mechanisms, two primary types of imputation methods have been summarized in theory: statistics-based and machine-learning-based methods. The former utilizes statistical principles to infer the missing values, and the mean and random methods are the two most representative approaches. Specifically, the mean imputation method chooses the mode or mean of the observed data to fill the missing values, and it has been widely applied in imputing missing values [10]. For numerical missing data, this method calculates the mean of the observed values of the corresponding attribute as the filling value; for the nonnumerical case, it uses the mode as a substitute for the missing values. The random imputation method is another common solution to missing data in surveys [46]. Its imputation scheme depends on the probability of each feature value in the whole observed dataset. It randomly selects an observed value as the imputation result, which means that a more frequent value is more likely to be chosen to replace a missing value. Because machine learning techniques are highly efficient for sizeable data analysis, the second type of imputation method takes full advantage of machine learning. Several typical machine-learning-based imputation methods are described below.
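A minimal sketch of these two statistical baselines follows, assuming a pandas DataFrame whose missing entries are np.nan; the helper names are illustrative, not the paper's implementation.

```python
import numpy as np
import pandas as pd

def mode_impute(df: pd.DataFrame) -> pd.DataFrame:
    # Fill each column's missing entries with that column's most frequent value.
    return df.fillna(df.mode().iloc[0])

def random_impute(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    # Fill each missing entry with a value drawn from the column's observed
    # values, so more frequent values are proportionally more likely to be chosen.
    rng = np.random.default_rng(seed)
    out = df.copy()
    for col in out.columns:
        mask = out[col].isna()
        if mask.any():
            observed = out[col].dropna().to_numpy()
            out.loc[mask, col] = rng.choice(observed, size=int(mask.sum()))
    return out
```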
The k-nearest neighbors (KNN) technique can mine latent patterns based on the similarity between samples. Therefore, the KNN-based imputation method selects the k complete samples closest to the missing sample from the whole complete sample set as candidate samples and takes the weighted average of the observed values in the candidate samples as the filling value [47]. There are many metrics for determining the neighbors of the target sample, and the Hamming distance has been chosen as the distance metric in many studies [48]. It counts the number of positions at which two strings of equal length differ, and it can be expressed as Equation (1):
$HD = \sum_{i=1}^{m} A_i \oplus B_i$ (1)
where $A$ and $B$ are the two samples involved in the calculation, $A_i$ and $B_i$ represent the $i$th feature of $A$ and $B$, respectively (their values are 1 or 0 after one-hot encoding), and $m$ is the dimension of the feature space. The Hamming distance describes the similarity of different samples: the smaller the distance, the more reliable the obtained filling value. Unlike the KNN-based imputation method, another type of machine-learning-based imputation method regards the filling process as a classification task, aiming to find a similar pattern in the missing data. Generally, all the incomplete features are divided into several subgroups, where each subgroup represents a classification target, and the features without missing values are fed into a specific learning model for each target. The random forest [49] and decision tree [44] algorithms are two representative methods in this category.
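The sketch below implements Equation (1) and the neighbor-selection step over one-hot (0/1) features; for binary discrete features the averaged neighbor vote reduces to a rounded majority, and all names are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    # Equation (1): number of positions at which two equal-length binary vectors differ.
    return int(np.sum(a != b))

def knn_impute(sample: np.ndarray, missing_idx: int,
               complete_set: np.ndarray, k: int = 5) -> int:
    # Distances are computed only over the features observed in `sample`.
    observed = [j for j in range(sample.size) if j != missing_idx]
    dists = [hamming_distance(sample[observed], row[observed]) for row in complete_set]
    neighbors = complete_set[np.argsort(dists)[:k]]
    # For 0/1 discrete features, the averaged vote rounds to the majority value.
    return int(round(neighbors[:, missing_idx].mean()))
```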
Deep learning is a main branch of machine learning that is especially suitable for unstructured data, such as images and text documents [50]. However, only a few missing value imputation methods for tabular or structured data have been studied. The autoencoder is a typical deep learning technique with the same number of neurons in the input and output layers [19]. Because of its special network structure, an autoencoder is easy to implement, as only the weights of a single network need to be trained. The steps of missing value imputation with an autoencoder are presented below, followed by a code sketch:
Step 1: Determine the network structure according to the object dataset;
Step 2: Split the dataset into the complete subset ($D_{com}$) and the incomplete subset ($D_{miss}$);
Step 3: Take $D_{com}$ as the training set, and calculate the weights of the network;
Step 4: Prefill the incomplete subset. Take the samples in $D_{miss}$ as the input of the trained model, and fill the missing values with the model's predictions.
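A minimal Keras sketch of Steps 1–4 is given below, assuming one-hot-encoded arrays and that the caller has already performed the Step 2 split; the 16-unit bottleneck and the adam optimizer are illustrative choices, not settings prescribed by the paper.

```python
import numpy as np
from tensorflow import keras

def autoencoder_impute(d_com, d_miss_prefilled, miss_mask, epochs=200):
    n_features = d_com.shape[1]
    # Step 1: a small symmetric network sized to the object dataset.
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        keras.layers.Dense(16, activation="relu"),             # encoder
        keras.layers.Dense(n_features, activation="sigmoid"),  # decoder
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # Step 3: train the network to reconstruct the complete subset D_com.
    model.fit(d_com, d_com, epochs=epochs, verbose=0)
    # Step 4: pass the prefilled D_miss through the trained model and keep the
    # reconstructed values only at the positions flagged as missing.
    recon = model.predict(d_miss_prefilled, verbose=0)
    return np.where(miss_mask, recon, d_miss_prefilled)
```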

2.3. Multilayer Perceptron

The multilayer perceptron (MLP) has been widely used in sizeable data processing, especially for images and text documents [40]. In general, it is composed of an input layer, an output layer, and one or more hidden layers, and the neurons in adjacent layers are fully connected. A standard MLP model is shown in Figure 2.
An MLP is trained with a supervised learning technique called backpropagation [40]. As shown in Figure 2, external data are transmitted directly to the next layer via the input units without any computational processing. The neurons in the hidden and output layers are the computational units of the network. During the training stage, the input data are first weighted and summed with the bias parameter; the sum is then passed to the activation function to obtain the output. Finally, the output neurons produce the predictions of the model. For weight updating and the error function, the gradient descent algorithm and the cross-entropy error are commonly utilized, the choice of which is also related to the characteristics of the object dataset. Specifically, the training process is as follows.
First, the input from a specific sample can be expressed as a multidimensional vector $x_i$, and the hidden unit outputs $net_{ik}$ are computed as in Equation (2):
$net_{ik} = \varphi\left( \sum_{l=1}^{S} \omega_{lk}^{(1)} x_{il} + b_{k}^{(1)} \right)$ (2)
where the superscript (1) refers to the corresponding parameters in the first layer of the MLP, $x_{il}$ indicates the $l$th attribute of $x_i$ among the whole set of $S$ attributes, $\omega_{lk}^{(1)}$ is the connection weight between the $l$th unit in the input layer and the $k$th unit in the hidden layer, and $b_{k}^{(1)}$ denotes the bias of the $k$th unit in the hidden layer. The weighted sum is activated via $\varphi$, a particular function such as $\tanh(x)$, $\mathrm{sigmoid}(x)$, or $\mathrm{relu}(x)$. Following that, all the $net_{ik}$ values are linearly combined and transformed by an output activation function $\vartheta$. The output unit $y_{ij}$ is computed via Equation (3):
$y_{ij} = \vartheta\left( \sum_{k=1}^{K} net_{ik}^{(1)} \omega_{kj}^{(2)} + b_{j}^{(2)} \right)$ (3)
where the superscript (2) refers to the corresponding parameters in the second layer of the MLP, and $net_{ik}^{(1)}$ plays the same role as $x_{il}$ does in Equation (2).
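To make Equations (2) and (3) concrete, here is a minimal NumPy sketch of the forward pass, using the relu and sigmoid activations adopted later in Section 3.2.2; all shapes and names are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w1, b1, w2, b2):
    # x: (S,), w1: (S, K), b1: (K,), w2: (K, J), b2: (J,)
    net = relu(x @ w1 + b1)        # Equation (2): hidden unit outputs net_ik
    return sigmoid(net @ w2 + b2)  # Equation (3): output unit predictions y_ij
```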

2.4. The Gradient Descent Algorithm with Momentum

An MLP is a typical artificial neural network consisting of several components, including neurons, weights, biases, and activation functions. In training an MLP model, the main task is to determine the parameters between adjacent layers, i.e., the connection weights and biases. Treating parameter calculation as an optimization problem, one solution is to use gradient descent. Specifically, all parameters are first initialized stochastically. The model is then trained iteratively, continuously calculating the gradient and updating the parameters until a specific condition is met (e.g., the error is less than a threshold, or the number of iterations reaches a threshold). Though the gradient descent technique has been widely used in parameter optimization, several challenges remain. One is to overcome the risk of falling into a local optimum, and researchers have proposed several algorithms to deal with this problem, including batch gradient descent, stochastic gradient descent, and mini-batch gradient descent. On the other hand, each update of the traditional descent algorithm in the MLP training process is based only on the current position, which slows the convergence speed. Therefore, momentum gradient descent (MGD), a kind of stochastic gradient descent algorithm, is introduced into the MLP model [29]. Incorporating the MGD algorithm, the parameters are updated via Equations (4)–(6):
$v_{\omega}^{k+1} = \beta v_{\omega}^{k} + (1-\beta)\,\nabla\omega^{k}$ (4)
$v_{b}^{k+1} = \beta v_{b}^{k} + (1-\beta)\,\nabla b^{k}$ (5)
$\omega^{k+1} = \omega^{k} - \alpha v_{\omega}^{k+1}, \qquad b^{k+1} = b^{k} - \alpha v_{b}^{k+1}$ (6)
where $\alpha$ is the learning rate; $\beta$ is the momentum coefficient, whose default value is usually set to 0.9; $\nabla\omega^{k}$ and $\nabla b^{k}$ are the gradients of the loss with respect to $\omega$ and $b$ at step $k$; and $v$ is the momentum term used to control the convergence speed, whose introduction accumulates all the previous gradients of $\omega$ and $b$ via Equations (4) and (5).
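As a concrete instance of Equations (4)–(6), a minimal sketch of one momentum update step follows; grad_w and grad_b stand for the backpropagated gradients at step k, and all names are illustrative.

```python
def momentum_step(w, b, grad_w, grad_b, v_w, v_b, alpha=0.001, beta=0.9):
    # Equations (4) and (5): accumulate an exponential average of the gradients.
    v_w = beta * v_w + (1.0 - beta) * grad_w
    v_b = beta * v_b + (1.0 - beta) * grad_b
    # Equation (6): move the parameters along the smoothed direction.
    w = w - alpha * v_w
    b = b - alpha * v_b
    return w, b, v_w, v_b
```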

3. The Proposed Method

Completing missing values using a machine learning technique aims to estimate the missing value by finding the correlations among attributes. Its goal is to develop prediction models for the missing values, which is generally regarded as a classification task. In this regard, building an accurate nonlinear prediction model for the unobserved set is the key to ensuring high imputation accuracy. The standard MLP architecture is chosen to impute discrete missing data in this study. The research framework and proposed scheme for discrete missing values are discussed in this section.

3.1. An Overview of the Proposed Method

Before the detailed discussion of the proposed methodology, an overall framework is presented. Figure 3 exhibits a comprehensive overview of the proposed method, whose process can be summarized as follows:
Step 1: The object dataset is divided into two parts: the observed or complete samples form the subset $D_{com}$, and the incomplete samples with missing attribute values form the subset $D_{miss}$. In this study, missing values are artificially simulated from complete datasets at different ratios for the three missing mechanisms;
Step 2: Discretizing the object data and determining the missing types are the two preliminaries of our scheme. An MLP with momentum gradient descent, called the IMLP, is applied to fill in the missing values. The IMLP model generates alternatives for the missing values after it is fully trained;
Step 3: $D_{com}$ and $D_{miss}$ are combined to recover the original dataset; $D_{miss}$ is also a complete dataset after Step 2;
Step 4: The imputation performance is measured against different imputation methods, such as the mode, random, KNN, and AE methods.

3.2. The Imputation Scheme Based on IMLP

Briefly, the imputation scheme based on the IMLP (ISB-IMLP) includes four steps, i.e., determination of the missing types, construction of the IMLP, training of the model, and reconstruction of the incomplete dataset. The details of the scheme are expounded below.

3.2.1. The Determination of the Missing Types

The core methodology is to use the complete data to train the IMLP model and then predict the missing values with the trained model. To conduct the simulation experiments, levels of noise are first artificially added to the object datasets; five different missing proportions are considered in this paper. Additionally, because there are three missing patterns (MAR, MCAR, and NMAR), each missing pattern is also artificially simulated according to its definition. As shown in the research framework, the subset $D_{com}$ is chosen to train the IMLP model, and it is discretized via the one-hot encoding technique. According to the definition of missing mechanisms, there are usually multiple missing types in $D_{miss}$. Consequently, it is essential to determine the missing types before employing the overall IMLP scheme. Figure 4 depicts an example with five missing types, where $A_i$ ($i = 1, 2, \ldots, n$) indicates the $i$th attribute of the incomplete dataset with $n$ attributes. The grid squares marked in black indicate the positions where missing values appear.
Specifically, each row denotes an instance in the incomplete dataset. For the first instance, the black marks appear in $A_1$, $A_2$, and $A_n$, so this missing type can be denoted as $mt_1$: $\{1, 2, n\}$; in the second instance, the missing value appears at $A_2$; the third instance includes two missing values, whose positions are $A_3$ and $A_n$. Figure 4 is just a basic example to illustrate the principle of identifying the missing type; the missing types of the object datasets used in the experiments are determined in the same way. As a result, the collection $MT = \{mt_1, mt_2, \ldots, mt_n\}$ denotes all $n$ missing types in $D_{miss}$. Note that all the illustrations in this section are based on this example.
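As an illustration of this procedure, the following pandas sketch groups incomplete rows by the positions of their missing entries, as in Figure 4; the helper name missing_types is hypothetical.

```python
import pandas as pd

def missing_types(df: pd.DataFrame) -> dict:
    # Keep only the rows with at least one missing value.
    incomplete = df[df.isna().any(axis=1)]
    # A row's missing type is the tuple of attribute positions that are absent.
    patterns = incomplete.isna().apply(
        lambda row: tuple(i for i, miss in enumerate(row) if miss), axis=1
    )
    # Map every missing type mt to the indices of the rows exhibiting it.
    return {mt: idx.tolist() for mt, idx in incomplete.groupby(patterns).groups.items()}
```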

3.2.2. The Construction of the IMLP

The input space of the multilayer perceptron corresponds to the feature space of the model input data, and the number of neurons in the input layer is equal to the feature dimension of the input data. The neurons in the input layer are fully connected to the neurons in the next layer. The last layer in the MLP network is the output layer, whose dimension corresponds to the feature dimension of the model output data. The neurons in the output layer are also fully connected to the neurons in the previous layer. The network layers between the input and output layers are called hidden layers. On the one hand, the MLP method commonly makes use of the backpropagation algorithm to determine all the parameters. However, the fully connected architecture leads to a considerable number of parameters and gradient descent computations, which consume time and resources. On the other hand, according to the principles of neural networks, all the connection weights and biases in the network are calculated and updated by a gradient descent algorithm, which may get stuck in a local optimum. All of these concerns motivate the proposed IMLP method.
In this study, we first construct an MLP model to fill in discrete missing values. The activation function in the hidden layers is $\mathrm{relu}(x)$, and $\mathrm{sigmoid}(x)$ is used for the output layer. The input and output data are explained in the next subsection. The MLP model cannot deal with missing values directly, so prefilling the missing values with specific approaches not only allows more data to enter the model for training on each missing type but also speeds up convergence. Strategically, gradient descent with momentum is introduced to improve the performance of the MLP.
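A minimal Keras sketch of one such model is given below, using the relu/sigmoid activations named above and the hyper-parameters reported in Section 4.1.3 (three hidden layers of 32 neurons, SGD with momentum 0.9 and learning rate 0.001, binary cross-entropy loss); the function name is illustrative.

```python
from tensorflow import keras

def build_imlp(n_inputs: int, n_outputs: int) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(n_inputs,)),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(32, activation="relu"),
        # One sigmoid unit per one-hot position of the missing type.
        keras.layers.Dense(n_outputs, activation="sigmoid"),
    ])
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=0.001, momentum=0.9),
        loss="binary_crossentropy",
    )
    return model
```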

3.2.3. The Training of the Model

Because the object datasets contain string data, they may not be trained well by the standard MLP model without an encoding manipulation. In addition, some attribute values are denoted as integers, but they are encoded from discrete data whose actual meanings are discrete. Generally, if a model directly trains on such data without processing to fill in the missing values, floating-point results will be obtained as the filling values. However, filling values obtained this way cannot be mapped back to the corresponding discrete patterns, which means the results do not satisfy the actual demand. Therefore, before model training starts, the input data are encoded with the one-hot technique.
Additionally, some prefilling strategies are needed to maintain the data size for model training, as the data size is crucial to the training performance of deep learning. To the best of our knowledge, imputing the missing values via statistical techniques, which requires no distance or similarity calculations between samples, offers agreeable performance in both accuracy and time cost. Compared with MAR and NMAR, under MCAR the missing data arise without relevance to the observed or unobserved features. In this study, the mode method is used as the prefilling strategy for the missing patterns MAR and NMAR, and the random method is selected to prefill the missing values under MCAR.
Figure 5 illustrates the overall architecture of the IMLP model, with $mt_1$ from Figure 4 selected as an example of a missing type. Its missing positions are 1, 2, and $n$, corresponding to the situation in which a sample has missing values in the first, second, and last attributes. The IMLP model is trained on the complete dataset, in which the features without missing values are the inputs of the model, and the attributes corresponding to the missing type are the outputs. As shown in Figure 5, $x_i$ represents the $i$th sample in the object dataset. The elements in $\{x_{i3}, x_{i4}, \ldots, x_{i(n-1)}\}$ are the input, while $x_{i1}$, $x_{i2}$, and $x_{in}$ are the output of the model. During the training process, the binary cross-entropy loss and the momentum gradient descent algorithm are selected as the loss function and the optimization strategy, respectively, to determine the optimal parameters of the IMLP model. In a word, the fundamental law of training is to use the observed attributes as input and the unobserved attributes as output to find the optimal network weights and biases, and each missing type is trained similarly.
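The following sketch illustrates this training setup for a single missing type, reusing the build_imlp helper sketched in Section 3.2.2; for simplicity the indices in mt refer to encoded columns, and all names are illustrative assumptions.

```python
import numpy as np

def train_for_missing_type(d_com: np.ndarray, mt: tuple,
                           epochs: int = 1000, batch_size: int = 256):
    # Observed columns are the inputs; the columns of the missing type are the targets.
    observed = [j for j in range(d_com.shape[1]) if j not in mt]
    x_train, y_train = d_com[:, observed], d_com[:, list(mt)]
    model = build_imlp(len(observed), len(mt))
    model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=0)
    return model, observed
```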

3.2.4. The Reconstruction of the Incomplete Dataset

During this phase, the main target is to use the trained IMLP models to predict the missing attribute values and fill them in with the predicted results. Specifically, the first step is to determine all the missing types of the target dataset. The scheme then selects the corresponding IMLP model for each identified missing type, which uses the observed data to impute the unobserved data. The incomplete data are then filled in one by one. Finally, the scheme outputs a complete dataset for subsequent operations.
In brief, the entire scheme can be illustrated in Figure 6, and the steps for filling in discrete missing data can be concisely summarized as follows.
First, the preliminary task is to determine the specific missing types for model development and training. The one-hot technique encodes the object data for discretization. After data preprocessing, an IMLP model is developed for each specific missing type. As a result, there is a set of IMLP models for imputing discrete missing data, where each model corresponds to a specific missing type. In fact, discrete missing data imputation is regarded as a classification task in this paper. To reconstruct the incomplete dataset, the alternatives to the missing data are generated by the IMLP set. Finally, the object dataset with missing data is transformed into a complete dataset based on this scheme.
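A sketch of this reconstruction loop follows, reusing the helpers sketched earlier; it assumes the missing entries of d_miss are stored as np.nan in a float array, and the helper names are illustrative.

```python
import numpy as np

def reconstruct(d_miss: np.ndarray, models: dict) -> np.ndarray:
    # models maps each missing type mt to its (trained model, observed columns).
    filled = d_miss.copy()
    for i, row in enumerate(d_miss):
        mt = tuple(np.where(np.isnan(row))[0])
        if not mt:
            continue  # already complete
        model, observed = models[mt]
        pred = model.predict(row[observed][None, :], verbose=0)[0]
        # Round the sigmoid outputs back to a discrete (0/1) alternative.
        filled[i, list(mt)] = np.round(pred)
    return filled
```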

4. Experiments and Discussions

In this section, seven datasets collected from the UCI Machine Learning Repository are selected to verify the performance of different imputation methods with regard to classification accuracy. Additionally, the impacts of missing rates and missing mechanisms on imputation methods are also studied. Specifically, each dataset is processed with three missing patterns (MAR, MCAR, and NMAR) and five missing rates ranging from 10% to 30% at an interval of 5%. In particular, the imputation performance of ISB-IMLP is compared with some standard imputation methods, i.e., the mode, random, and hot-deck imputation methods, as well as the imputation methods based on the k-nearest neighbors (KNN), decision tree (DT), random forest (RF), standard multilayer perceptron (MLP), and autoencoder (AE) techniques. Furthermore, the performances of the benchmark methods and the IMLP method under different missing patterns are evaluated, and the changing trends with different missing ratios are also researched. The performance evaluation is conducted on classification accuracy using several different classifiers on each dataset.

4.1. Experiment Setup

In this section, the details of the simulation experiments are presented in three aspects, i.e., platform, datasets, and other settings.

4.1.1. Experiment Platform

The experiments in this paper are run on a simulation platform with the following settings: Windows 10 64-bit operating system, AMD R7-4800H processor, and 16 GB RAM. The programming language is Python 3.7, and the main libraries used are NumPy 1.21, Pandas 1.3.4, scikit-learn 1.0.2, Keras 2.8, and TensorFlow 2.8.

4.1.2. Dataset Description

Table 1 shows the characteristics of the datasets used in this section, including the number of samples, attributes, and classes. These datasets are collected from different real-world fields for binary or multi-class classification tasks, and they are all composed of discrete data or mixtures of discrete and continuous data. In particular, some datasets in this table originally contain missing values (i.e., Breast Cancer and Blood), which would interfere with evaluating the imputation methods. Therefore, the preliminary task is to remove the incomplete samples and obtain seven complete datasets. To simulate incomplete datasets and validate the performance of the proposed imputation method, each complete dataset is transformed into 15 variant datasets (three missing mechanisms, five missing rates). With this setting, it is convenient to evaluate the performance of the imputation methods: the complete dataset before the deletion processing serves as the control group, and the dataset after filling in the missing values serves as the experimental group.

4.1.3. Other Settings

For the missing-proportion settings, five levels of noise are artificially added to the datasets in Table 1, ranging from 10% to 30%. For smaller missing proportions, simply deleting the missing data may be the most efficient manipulation, while higher missing proportions would be improper for our object datasets and would leave insufficient data for model training.
For the experimental methods, the random method stochastically selects an observed value as the filling alternative, and the mode method statistically sets the most frequent value as the imputing option; both need no extra settings. The hot-deck method finds a substitute for the missing value based on the similarity between samples. For the parameters of the machine learning techniques, the number of neighbors for KNN is set to 5, and the Hamming distance is chosen to measure the nearest neighbors. The methods based on the decision tree and random forest classifiers develop models between observed attributes and unobserved attributes, regarding missing value imputation as a classification task; their parameters are generally the defaults that scikit-learn provides. In particular, the decision tree used in the experiments is CART, which employs the Gini coefficient as the splitting criterion. For the parameters of the neural networks, all the networks are composed of three hidden layers, where each layer has 32 neurons, and the learning rate, batch size, and number of epochs are set to 0.001, 256, and 1000, respectively. We adopt stochastic gradient descent with momentum as the optimizer and set the momentum coefficient to 0.9. The prefilling method for the IMLP is the mode imputation method in both MAR and NMAR, and the random method in MCAR.
On the other hand, the classifiers used to evaluate the performance may introduce biases, which may be related to the characteristics of the datasets or the distribution of the data. All the other hyper-parameters are set to their default values. In [11], some learning algorithms, including naive Bayes (NB), k-nearest neighbors (KNN), and support vector machine (SVM), were considered to have biases on some specific datasets or data. In addition, the decision tree (DT) method has good performance for multi-class tasks. Therefore, NB, KNN, SVM, and DT are selected as different classifiers to verify the robustness of the imputation methods.

4.2. Experiment Analysis

4.2.1. The Performance of Missing Value Imputation

According to the research framework in Section 3, this section executes the last research phase, i.e., measuring and evaluating the performance of different imputation methods compared with the imputation scheme based on the IMLP (ISB-IMLP). Some statistical and machine learning techniques, including the mode, random, hot-deck, KNN, decision tree, random forest, MLP, and AE methods, are selected to fill the missing data as a control group. To evaluate the imputation performance of the different methods, the classification accuracy based on the SVM classifier on the seven real-world datasets under three missing mechanisms is the primary evaluation metric. Note that all the results are average accuracies after 10-fold cross-validation, where the ratio of the training to the testing set is 9:1. To reduce the chance of erroneous results, the standard deviation over five repetitions of the classification task is calculated and placed after the accuracy. All the results are presented in decimal form to simplify calculation and analysis. Moreover, the classification accuracy based on the original dataset without any added noise is also obtained for comparison with the experimental methods.
Table 2 presents the average classification accuracy over five missing proportions obtained with the MLP and IMLP imputation models, where the experiments are conducted under MAR. The last row in this table shows the accuracy obtained from the clean dataset with the same operations as the others. Figure 7 visualizes the comparison of the accuracies obtained from MLP, IMLP, and the clean dataset under MCAR and NMAR. The accuracy obtained from the clean dataset acts as a standard for the others: the closer to it, the better the performance. On the one hand, the classification accuracy obtained from the IMLP model is about 0.01 or 0.02 higher than that of the MLP model, which shows that our modification improves the model's ability to fill in discrete missing data compared with the standard MLP model. On the other hand, the average classification accuracy is also close to the accuracy obtained from the unprocessed dataset.
When the object dataset is Breast Cancer or Blood, both IMLP and MLP achieve higher accuracy than the clean dataset, as these two datasets originally contained missing data. This means that filling in missing data has a positive effect on improving performance. When the experimental dataset is Car Evaluation or Balance Scale, the difference from the standard accuracy is larger than for the other sets, which means that missing data significantly degrade the classification performance on such datasets. Generally, IMLP provides a better overall ability to fill in missing data, which shows that the momentum descent algorithm and the prefilling strategy have a positive effect on optimizing the performance of the MLP.
Table 3 shows the complete experimental results for the MAR scenario, while Table 4 and Table 5 present each imputation method's average accuracy over five missing proportions on the object datasets for MCAR and NMAR, respectively. In Table 3, each row contains nine accuracies and standard deviations for a certain missing proportion of a specific dataset, where bold represents the best accuracy obtained by the corresponding imputation method for each dataset with a specific missing proportion. In Table 4 and Table 5, each row contains the accuracies obtained from different datasets based on the corresponding imputation method, where bold represents the best classification accuracy for each dataset.
MAR means that the missing values occur randomly in a way related to the observed complete samples or attribute values. In this regard, statistical or machine learning techniques are theoretically feasible for imputing the missing values. In this situation, Table 3 indicates that ISB-IMLP performs well in most circumstances, especially for the Lymphography, Breast Cancer, Blood, and Contraceptive Method Choice datasets. For the other test objects, the classification accuracy acquired by the proposed method is closer to the best one than those of the others. For example, when the object dataset is Car Evaluation, the mode imputation approach outperforms the others at the missing proportions of 0.1, 0.15, and 0.3. However, our method is nearly as good, with corresponding differences of only 0.0057, 0.0044, and 0.0052. Over the 35 sets of experimental results, the proposed method achieves the best accuracy in 54.29% of the cases, against 11.43% for its best competitors, the mode and KNN methods.
Aggregating the results over five missing proportions and seven datasets, the best, second-best, and worst average accuracy values are 71.52%, 70.04%, and 68.95%, respectively, and the corresponding methods are IMLP, decision tree, and KNN. Although KNN-based imputation performed well for continuous data in much of the previous literature [12,13,30], its ability to handle discrete missing data is not as good as its ability to handle continuous data. According to the results, IMLP brings a 1.48% improvement over CART, even though CART was found suitable for discrete missing data in [26]. Because statistical techniques have low computational and training costs, filling in discrete missing data with them is faster than with schemes based on machine learning or deep learning techniques. Compared to the other experimental techniques, our imputation approach incurs an extra computational cost for adequate training. However, IMLP provides overall superior classification performance. The combination of the statistical prefilling technique and the momentum gradient descent algorithm makes the model converge well, providing better classification performance after filling in the missing discrete data. The IMLP model also performs better than the imputation scheme based on the standard MLP model, with an average classification accuracy 0.0177 higher, which indicates that the IMLP improves upon the MLP. Generally, this verifies that the IMLP model with a prefilling operation has better imputation performance under MAR.
MCAR means that the missing values occur completely at random, while NMAR refers to the case in which missing values are related to unobserved attribute values. For the missing patterns MCAR and NMAR, Table 4 and Table 5 show the average performance on seven datasets over five levels of noise, respectively. As shown in Table 4, our method obtains the best classification results except on the Lymphography dataset, which testifies that the proposed approach is also good for the MCAR missing pattern. Moreover, imputing missing values with the KNN technique also performs well in this simulation situation; in particular, for the Lymphography dataset, the KNN imputation method obtains the best classification accuracy. To statistically evaluate the improvement of ISB-IMLP, the average aggregated accuracies on the seven datasets obtained from the MLP and IMLP models are 0.6991 and 0.7111, respectively; the latter is 1.2% higher than the former.
As shown in Table 5, ISB-IMLP outperforms the other eight imputation methods on 71.43% of the datasets. For the two exceptions, the Blood and Balance Scale datasets, the accuracies obtained with the proposed method rank second only to the best, with differences of only 0.0004 and 0.0027, respectively. The best imputation methods for the Blood and Balance Scale datasets are the mode and decision tree, respectively. On the one hand, these two methods are also acceptable for the experimental datasets under this missing pattern; on the other hand, our approach still has the better overall performance under NMAR.
To illustrate stability visually, the boxplots in Figure 8 show the classification accuracies obtained from the nine experimental methods on the Car Evaluation and Blood datasets. According to the boxplots, ISB-IMLP has a longer box than the others in most situations; however, its mean, maximum, and minimum are comprehensively higher than those of the other methods. This indicates that the missing proportion affects the imputation performance. Moreover, the results report the standard deviation for the corresponding situation after the symbol '±'. In general, the proposed method does not have the lowest standard deviation in most cases. Comparing the imputation methods based on statistical and machine learning techniques, the standard deviations of the deep-learning-based methods (i.e., AE, MLP, and IMLP) are generally higher than those of the other two categories, for which the reason may be the sample size or the distribution of the data. Machine learning techniques with neural networks are sensitive to the scale of the training samples, and insufficient training data will lead to underfitting. Compared with the standard MLP model, the results of our method are smoother as a whole.
In [11,33], the average classification accuracy obtained using different imputation methods under different missing rates was selected to show the overall changing trend for continuous missing data. Figure 9 exhibits the average SVM classification accuracy based on the different imputation models, where the x-axis and y-axis represent the missing rate and the average accuracy for each line chart, respectively. A higher missing rate means more missing information for the classification learning algorithm. Thus, the average accuracy decreases as the missing proportion increases, regardless of the imputation method. However, there are a few exceptional cases in which the average accuracy increases as the missing proportion grows. For example, when the missing ratio is 30% under NMAR, the accuracy of the proposed method increases compared to that at the 25% missing proportion. One reason could be that the artificial operation of filling missing values impacts the classification task positively. In particular, the accuracy under MAR and MCAR is around 2% higher than under NMAR, which suggests that the missing data in NMAR may be harder to deal with. As a whole, these line charts show that our method outperforms many classical imputation methods for discrete missing data. Though the mode and random methods are fast for discrete missing data imputation, ISB-IMLP gives robust and higher classification accuracy for datasets with discrete missing data, albeit at a larger time cost. As for model-based schemes for filling discrete missing data, ISB-IMLP improves the convergence of the model and provides stable performance for each missing mechanism.

4.2.2. The Imputation Performance of Different Classifiers

The selection of the classification learning algorithm always affects the final performance. Consequently, many studies have investigated the influence of the choice of classifier on imputation performance [11,40,51]. Figure 10 presents the aggregated average accuracy over five missing proportions under the MCAR missing mechanism; this mechanism is chosen for evaluation because the missing values appear completely at random, which may make the imputation performance unstable. The x-axis gives the names of the object datasets, while the y-axis gives the aggregated accuracy. According to the histogram, our proposed method does not necessarily achieve the best accuracy overall, which means it is advisable to select a suitable learning algorithm for a specific dataset. Specifically, the naive Bayes classifier may not be the first choice for the Car Evaluation dataset, where its overall accuracy is lower than 70%, while the other three classifiers obtain accuracies over 70%. Moreover, all the imputation methods using the naive Bayes learning algorithm on the Blood dataset have low accuracies, and the others are 30% higher. It is also evident that the SVM and naive Bayes models are more suitable for the Balance Scale dataset, where their accuracies are about 5% higher than those acquired by the decision tree and KNN models. Note that ISB-IMLP with a decision tree classifier has less of an advantage on datasets with small feature spaces, where the distribution of the dataset has a negative effect on ISB-IMLP. In this regard, the selection of classifiers plays a significant role in classification tasks. On the other hand, it is found that our method combined with SVM obtains the best overall performance, whereas on object datasets with a small number of samples, the imputation methods using statistical techniques are generally better than the proposed method. One reason could be the deficiency of training samples, with which the model might not converge within a finite number of iterations. Consequently, it is also necessary to determine suitable methods for datasets of different sizes.
However, it is difficult to conclude which learning algorithm and imputation method perform the best. The reasons can be summarized as follows. First, the uncertainty of the data distribution and the difficulty of determining it affect the imputation performance for discrete data. Second, different learning algorithms have distinctive biases on the final results, which can lead to a large difference between two classifiers. Moreover, the missing pattern might be another factor affecting the results. In conclusion, our modified MLP model can reach acceptable performance in most situations for discrete missing values, which could enhance imputation theory in the future.

5. Conclusions

This paper concentrates on developing an approach for filling in discrete missing data and applying it to real-world classification tasks; an imputation scheme based on the IMLP (ISB-IMLP) is proposed. Specifically, the standard MLP method combined with momentum gradient descent and definite prefilling plans is used to fill in discrete missing data. ISB-IMLP first develops a classification model for each missing type, and the generated alternatives for the missing types are gathered to reconstruct the incomplete dataset. To explore the effects of missing mechanisms, missing proportions, and the selection of the learning algorithm, the performance of ISB-IMLP is evaluated through experiments conducted on seven real-world datasets under three missing mechanisms (MAR, MCAR, and NMAR); the results are compared with those of eight typical imputation methods. Moreover, experimental comparisons were made using five missing proportions ranging from 10% to 30%. The baseline imputation methods were the mode, random, hot-deck, KNN, decision tree, random forest, autoencoder, and standard MLP techniques. The simulations show that ISB-IMLP has superior performance to the MLP for each missing mechanism, with the former's classification accuracy being around 1% or 2% higher than the latter's. Compared to the statistics-based methods, ISB-IMLP has positive effects on imputation performance when the time cost is not a primary concern. According to the results, finding a general method for all missing situations remains a challenge; the reasons for this include the missing mechanism and missing proportion, the selection of the classification algorithm, and the data size for model development. As a whole, ISB-IMLP performs well under the three missing mechanisms in most situations, offering a practical approach for discrete missing data.
Several unresolved issues could be researched further. First, it is difficult to find an imputation method applicable to all missing mechanisms, whether it is based on statistical theory or on machine learning techniques; developing one will be a main task of future missing value imputation research. Moreover, missing values affect not only classification accuracy but also the efficiency of data processing techniques such as feature selection. Given the above analysis, missing value imputation still has several challenges to overcome and offers extensive research opportunities.

Author Contributions

Conceptualization, Z.Y.; methodology, H.P., J.Y. and C.Y.; software, H.P. and C.Y.; validation, Z.Y. and Q.H.; formal analysis, H.P.; investigation, H.P. and C.Y.; resources, Z.Y.; data curation, H.P., R.L. and J.S.; writing—original draft preparation, Z.Y., C.Y. and H.P.; writing—review and editing, X.L. and Q.H.; visualization, H.P.; supervision, Z.Y. and J.S.; project administration, Z.Y.; funding acquisition, Z.Y. and Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant Nos. 61502155, 61772180; funded by the Fujian Provincial Key Laboratory of Data Intensive Computing and Key Laboratory of Intelligent Computing and Information Processing, Fujian No. BD201801; and funded by Wuhan Science and Technology Bureau 2022 Knowledge Innovation Dawning Plan Project, No. 2022010801020270.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tsai, C.-F.; Li, M.-L.; Lin, W.-C. A Class Center Based Approach for Missing Value Imputation. Knowl.-Based Syst. 2018, 151, 124–135. [Google Scholar] [CrossRef]
  2. ShakorShahabi, R.; Qarahasanlou, A.N.; Azimi, S.R.; Mottahedi, A. Application of Data Mining in Iran’s Artisanal and Small-Scale Mines Challenges Analysis. Resour. Policy 2021, 74, 102337. [Google Scholar] [CrossRef]
  3. Li, D.; Zhang, H.; Li, T.; Bouras, A.; Yu, X.; Wang, T. Hybrid Missing Value Imputation Algorithms Using Fuzzy C-Means and Vaguely Quantified Rough Set. IEEE Trans. Fuzzy Syst. 2022, 30, 1396–1408. [Google Scholar] [CrossRef]
  4. Nijman, S.; Leeuwenberg, A.; Beekers, I.; Verkouter, I.; Jacobs, J.; Bots, M.; Asselbergs, F.; Moons, K.; Debray, T. Missing Data Is Poorly Handled and Reported in Prediction Model Studies Using Machine Learning: A Literature Review. J. Clin. Epidemiol. 2022, 142, 218–229. [Google Scholar] [CrossRef]
  5. Abu-Soud, S.M. A Novel Approach for Dealing with Missing Values in Machine Learning Datasets with Discrete Values. In Proceedings of the 2019 International Conference on Computer and Information Sciences (ICCIS), Aljouf, Saudi Arabia, 10–11 April 2019; pp. 1–5. [Google Scholar] [CrossRef]
  6. Lin, W.-C.; Ke, S.-W.; Tsai, C.-F. When Should We Ignore Examples with Missing Values? Int. J. Data Warehous. Min. 2017, 13, 53–63. [Google Scholar] [CrossRef]
  7. Christopher, S.Z.; Siswantining, T.; Sarwinda, D.; Bustaman, A. Missing Value Analysis of Numerical Data Using Fractional Hot Deck Imputation. In Proceedings of the 2019 3rd International Conference on Informatics and Computational Sciences (ICICoS), Semarang, Indonesia, 29–30 October 2019; pp. 1–6. [Google Scholar] [CrossRef]
  8. Krause, R.W.; Huisman, M.; Steglich, C.; Snijders, T.A.B. Missing Network Data: A Comparison of Different Imputation Methods. In Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Barcelona, Spain, 28–31 August 2018; pp. 159–163. [Google Scholar] [CrossRef] [Green Version]
  9. Biessmann, F.; Salinas, D.; Schelter, S.; Schmidt, P.; Lange, D. “Deep” Learning for Missing Value Imputation in Tables with Non-Numerical Data. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM ’18), Turin, Italy, 22–26 October 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 2017–2025. [Google Scholar] [CrossRef]
  10. Lin, W.-C.; Tsai, C.-F. Missing Value Imputation: A Review and Analysis of the Literature (2006–2017). Artif. Intell. Rev. 2020, 53, 1487–1509. [Google Scholar] [CrossRef]
  11. Khan, H.; Wang, X.; Liu, H. Missing Value Imputation through Shorter Interval Selection Driven by Fuzzy C-Means Clustering. Comput. Electr. Eng. 2021, 93, 107230. [Google Scholar] [CrossRef]
  12. Faisal, S.; Tutz, G. Imputation Methods for High-Dimensional Mixed-Type Datasets by Nearest Neighbors. Comput. Biol. Med. 2021, 135, 104577. [Google Scholar] [CrossRef]
  13. Sanjar, K.; Bekhzod, O.; Kim, J.; Paul, A.; Kim, J. Missing Data Imputation for Geolocation-Based Price Prediction Using KNN–MCF Method. ISPRS Int. J. Geo-Inf. 2020, 9, 227. [Google Scholar] [CrossRef] [Green Version]
  14. Mishra, A.; Naik, B.; Srichandan, S.K. Missing Value Imputation Using ANN Optimized by Genetic Algorithm. Int. J. Appl. Ind. Eng. 2018, 5, 41–57. [Google Scholar] [CrossRef] [Green Version]
  15. Wang, G.; Lu, J.; Choi, K.-S.; Zhang, G. A Transfer-Based Additive LS-SVM Classifier for Handling Missing Data. IEEE Trans. Cybern. 2020, 50, 739–752. [Google Scholar] [CrossRef]
  16. Lu, C.-B.; Mei, Y. An Imputation Method for Missing Data Based on an Extreme Learning Machine Auto-Encoder. IEEE Access 2018, 6, 52930–52935. [Google Scholar] [CrossRef]
  17. Silva-Ramírez, E.-L.; Pino-Mejías, R.; López-Coello, M. Single Imputation with Multilayer Perceptron and Multiple Imputation Combining Multilayer Perceptron and K-Nearest Neighbours for Monotone Patterns. Appl. Soft Comput. 2015, 29, 65–74. [Google Scholar] [CrossRef]
  18. Lin, J.; Li, N.; Alam, M.A.; Ma, Y. Data-Driven Missing Data Imputation in Cluster Monitoring System Based on Deep Neural Network. Appl. Intell. 2020, 50, 860–877. [Google Scholar] [CrossRef] [Green Version]
  19. Pereira, R.C.; Santos, M.S.; Rodrigues, P.P.; Abreu, P.H. Reviewing Autoencoders for Missing Data Imputation: Technical Trends, Applications and Outcomes. J. Artif. Intell. Res. 2020, 69, 1255–1285. [Google Scholar] [CrossRef]
  20. Yoon, J.; Jordon, J.; Schaar, M. GAIN: Missing Data Imputation Using Generative Adversarial Nets. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; PMLR: Stockholm, Sweden, 2018; pp. 5689–5698. [Google Scholar]
  21. Choudhury, S.J.; Pal, N.R. Imputation of Missing Data with Neural Networks for Classification. Knowl.-Based Syst. 2019, 182, 104838. [Google Scholar] [CrossRef]
  22. Gad, I.; Hosahalli, D.; Manjunatha, B.R.; Ghoneim, O.A. A Robust Deep Learning Model for Missing Value Imputation in Big NCDC Dataset. Iran J. Comput. Sci. 2021, 4, 67–84. [Google Scholar] [CrossRef]
  23. Cheng, C.-Y.; Tseng, W.-L.; Chang, C.-F.; Chang, C.-H.; Gau, S.S.-F. A Deep Learning Approach for Missing Data Imputation of Rating Scales Assessing Attention-Deficit Hyperactivity Disorder. Front. Psychiatry 2020, 11, 673. [Google Scholar] [CrossRef]
  24. Jung, S.; Moon, J.; Park, S.; Rho, S.; Baik, S.W.; Hwang, E. Bagging Ensemble of Multilayer Perceptrons for Missing Electricity Consumption Data Imputation. Sensors 2020, 20, 1772. [Google Scholar] [CrossRef] [Green Version]
  25. Śmieja, M.; Struski, Ł.; Tabor, J.; Zieliński, B.; Spurek, P. Processing of Missing Data by Neural Networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Montréal, QC, Canada, 2018; Volume 31. [Google Scholar]
  26. Tsai, C.-F.; Hu, Y.-H. Empirical Comparison of Supervised Learning Techniques for Missing Value Imputation. Knowl. Inf. Syst. 2022, 64, 1047–1075. [Google Scholar] [CrossRef]
  27. Xu, X.; Chong, W.; Li, S.; Arabo, A.; Xiao, J. MIAEC: Missing Data Imputation Based on the Evidence Chain. IEEE Access 2018, 6, 12983–12992. [Google Scholar] [CrossRef]
  28. Wang, H.; Yuan, Z.; Chen, Y.; Shen, B.; Wu, A. An Industrial Missing Values Processing Method Based on Generating Model. Comput. Netw. 2019, 158, 61–68. [Google Scholar] [CrossRef]
  29. Liu, W.; Chen, L.; Chen, Y.; Zhang, W. Accelerating Federated Learning via Momentum Gradient Descent. IEEE Trans. Parallel Distrib. Syst. 2020, 31, 1754–1766. [Google Scholar] [CrossRef] [Green Version]
  30. Yan, C.; Yuan, J.; Ye, Z.; Yang, Z. A Discrete Missing Data Imputation Method Based on Improved Multi-Layer Perceptron. In Proceedings of the 2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Krakow, Poland, 22–25 September 2021; pp. 480–484. [Google Scholar] [CrossRef]
  31. Emmanuel, T.; Maupong, T.; Mpoeleng, D.; Semong, T.; Mphago, B.; Tabona, O. A Survey on Missing Data in Machine Learning. J. Big Data 2021, 8, 140. [Google Scholar] [CrossRef]
  32. Raja, P.S.; Thangavel, K. Missing Value Imputation Using Unsupervised Machine Learning Techniques. Soft Comput. 2020, 24, 4361–4392. [Google Scholar] [CrossRef]
  33. Lim, S.; Kim, S.J.; Park, Y.; Kwon, N. A Deep Learning-Based Time Series Model with Missing Value Handling Techniques to Predict Various Types of Liquid Cargo Traffic. Expert Syst. Appl. 2021, 184, 115532. [Google Scholar] [CrossRef]
  34. Zhang, Y.; Zhou, B.; Cai, X.; Guo, W.; Ding, X.; Yuan, X. Missing Value Imputation in Multivariate Time Series with End-to-End Generative Adversarial Networks. Inf. Sci. 2021, 551, 67–82. [Google Scholar] [CrossRef]
  35. Li, Y.; Bao, T.; Chen, H.; Zhang, K.; Shu, X.; Chen, Z.; Hu, Y. A Large-Scale Sensor Missing Data Imputation Framework for Dams Using Deep Learning and Transfer Learning Strategy. Measurement 2021, 178, 109377. [Google Scholar] [CrossRef]
  36. Abdella, M.; Marwala, T. The Use of Genetic Algorithms and Neural Networks to Approximate Missing Data in Database. In Proceedings of the IEEE 3rd International Conference on Computational Cybernetics, ICCC 2005, Le Victoria Hotel, Mauritius, 13–16 April 2005; pp. 207–212. [Google Scholar] [CrossRef] [Green Version]
  37. Jerez, J.M.; Molina, I.; García-Laencina, P.J.; Alba, E.; Ribelles, N.; Martín, M.; Franco, L. Missing Data Imputation Using Statistical and Machine Learning Methods in a Real Breast Cancer Problem. Artif. Intell. Med. 2010, 50, 105–115. [Google Scholar] [CrossRef]
  38. Fallah, B.; Ng, K.T.W.; Vu, H.L.; Torabi, F. Application of a multi-stage neural network approach for time-series landfill gas modeling with missing data imputation. Waste Manag. 2020, 116, 66–78. [Google Scholar] [CrossRef]
  39. Luo, Y. Evaluating the State of the Art in Missing Data Imputation for Clinical Data. Brief. Bioinform. 2022, 23, bbab489. [Google Scholar] [CrossRef] [PubMed]
  40. Lin, W.-C.; Tsai, C.-F.; Zhong, J.R. Deep Learning for Missing Value Imputation of Continuous Data and the Effect of Data Discretization. Knowl.-Based Syst. 2022, 239, 108079. [Google Scholar] [CrossRef]
  41. Yang, M. Repair Missing Data to Improve Corporate Credit Risk Prediction Accuracy with Multi-Layer Perceptron. Soft Comput. 2022, 1–12. [Google Scholar] [CrossRef]
  42. Sefidian, A.M.; Daneshpour, N. Missing Value Imputation Using a Novel Grey Based Fuzzy C-Means, Mutual Information Based Feature Selection, and Regression Model. Expert Syst. Appl. 2019, 115, 68–94. [Google Scholar] [CrossRef]
  43. Karmitsa, N.; Taheri, S.; Bagirov, A.; Mäkinen, P. Missing Value Imputation via Clusterwise Linear Regression. IEEE Trans. Knowl. Data Eng. 2022, 34, 1889–1901. [Google Scholar] [CrossRef]
  44. Nikfalazar, S.; Yeh, C.-H.; Bedingfield, S.; Khorshidi, H.A. Missing Data Imputation Using Decision Trees and Fuzzy Clustering with Iterative Learning. Knowl. Inf. Syst. 2020, 62, 2419–2437. [Google Scholar] [CrossRef]
  45. Dua, D.; Graff, C. UCI Machine Learning Repository; University of California, School of Information and Computer Science: Irvine, CA, USA, 2019; Available online: http://archive.ics.uci.edu/ml (accessed on 1 September 2021).
  46. Lo, A.W.; Siah, K.W.; Wong, C.H. Machine Learning with Statistical Imputation for Predicting Drug Approvals. SSRN 2019, 60. [Google Scholar] [CrossRef]
  47. Santos, M.S.; Abreu, P.H.; Wilk, S.; Santos, J. How Distance Metrics Influence Missing Data Imputation with K-Nearest Neighbours. Pattern Recognit. Lett. 2020, 136, 111–119. [Google Scholar] [CrossRef]
  48. Doust, I.; Robertson, G.; Stoneham, A.; Weston, A. Distance Matrices of Subsets of the Hamming Cube. Indag. Math. 2021, 32, 646–657. [Google Scholar] [CrossRef]
  49. Alsaber, A.R.; Pan, J.; Al-Hurban, A. Handling Complex Missing Data Using Random Forest Approach for an Air Quality Monitoring Dataset: A Case Study of Kuwait Environmental Data (2012 to 2018). Int. J. Environ. Res. Public. Health 2021, 18, 1333. [Google Scholar] [CrossRef]
  50. Pouyanfar, S.; Sadiq, S.; Yan, Y.; Tian, H.; Tao, Y.; Reyes, M.P.; Shyu, M.-L.; Chen, S.-C.; Iyengar, S.S. A Survey on Deep Learning: Algorithms, Techniques, and Applications. ACM Comput. Surv. 2019, 51, 1–36. [Google Scholar] [CrossRef]
  51. Spinelli, I.; Scardapane, S.; Uncini, A. Missing Data Imputation with Adversarially-Trained Graph Convolutional Networks. Neural Netw. 2020, 129, 249–260. [Google Scholar] [CrossRef]
Figure 1. An example of the one-hot encoding technique in the Lymphography dataset.
Figure 2. A basic MLP model framework.
Figure 3. The overall workflow of the proposed IMLP imputation technique.
Figure 4. An example of five missing types.
Figure 5. IMLP model for mt_i (the i-th missing type).
Figure 6. Overall process of ISB-IMLP.
Figure 7. Average classification accuracy obtained from MLP, IMLP imputation methods and unprocessed dataset: (a) in MCAR and (b) in NMAR.
Figure 8. Boxplots of accuracies for five missing proportions on the Car Evaluation and Blood datasets in MAR, MCAR, and NMAR. (a–c) show the Car Evaluation dataset in MAR, MCAR, and NMAR, respectively; (d–f) show the Blood dataset in MAR, MCAR, and NMAR, respectively. In each boxplot, the solid square marks the mean, the hollow diamond marks the 1st percentile, and × marks the 99th percentile.
Figure 9. Average classification accuracies of SVM for different imputation models in three missing patterns: (a) in MAR; (b) in MCAR; (c) in NMAR.
Figure 10. Performance analysis of four classifiers for nine imputation methods in MCAR. (a) for SVM; (b) for decision tree; (c) for naïve Bayes; and (d) for KNN.
Table 1. Details of UCI datasets used in the experiments.

| Dataset Name | No. of Samples | No. of Features | No. of Classes |
| --- | --- | --- | --- |
| Blood (B) | 748 | 5 | 2 |
| Breast Cancer (BC) | 286 | 9 | 2 |
| Balance Scale (BS) | 625 | 4 | 5 |
| Car Evaluation (CE) | 1728 | 6 | 4 |
| Contraceptive Method Choice (CMC) | 1473 | 9 | 3 |
| Lymphography (LYM) | 148 | 18 | 4 |
| Tic-Tac-Toe (TTT) | 958 | 9 | 2 |
Table 2. Average classification accuracy obtained from MLP, IMLP imputation methods and unprocessed dataset in MAR.

| Method | CE | LYM | BC | B | TTT | CMC | BS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MLP | 0.75506 | 0.74668 | 0.71074 | 0.7616 | 0.700675 | 0.42888 | 0.77778 |
| IMLP | 0.76626 | 0.78508 | 0.7262 | 0.7752 | 0.72215 | 0.44054 | 0.7887 |
| Origin | 0.875816 | 0.806664 | 0.70234 | 0.761924 | 0.76434 | 0.471 | 0.910028 |
Table 3. Classification accuracy and standard deviation with SVM classifier on seven datasets in MAR missing pattern.

| Dataset | Rate | Mode | Random | AE | KNN | RF | DT | MLP | Hot-Deck | IMLP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CE | 0.1 | 0.8092 ± 0.0064 | 0.7909 ± 0.0088 | 0.7983 ± 0.0064 | 0.7864 ± 0.0109 | 0.7914 ± 0.014 | 0.7861 ± 0.0046 | 0.792 ± 0.0133 | 0.7768 ± 0.0137 | 0.8035 ± 0.0127 |
| CE | 0.15 | 0.7853 ± 0.0084 | 0.7766 ± 0.0093 | 0.7706 ± 0.0127 | 0.7711 ± 0.0146 | 0.7605 ± 0.0056 | 0.7719 ± 0.0093 | 0.7786 ± 0.0136 | 0.7711 ± 0.0101 | 0.7809 ± 0.0104 |
| CE | 0.2 | 0.7536 ± 0.0055 | 0.752 ± 0.0063 | 0.7534 ± 0.0133 | 0.7569 ± 0.0113 | 0.7561 ± 0.0096 | 0.7644 ± 0.0118 | 0.7673 ± 0.015 | 0.7569 ± 0.003 | 0.785 ± 0.0217 |
| CE | 0.25 | 0.7393 ± 0.0159 | 0.7204 ± 0.0206 | 0.7373 ± 0.013 | 0.6953 ± 0.006 | 0.7362 ± 0.0037 | 0.7428 ± 0.0075 | 0.7242 ± 0.0206 | 0.739 ± 0.0145 | 0.7347 ± 0.006 |
| CE | 0.3 | 0.7324 ± 0.0075 | 0.6971 ± 0.013 | 0.7184 ± 0.0155 | 0.7117 ± 0.0152 | 0.6938 ± 0.0096 | 0.7093 ± 0.013 | 0.7132 ± 0.0135 | 0.6957 ± 0.012 | 0.7272 ± 0.0126 |
| LYM | 0.1 | 0.7453 ± 0.0644 | 0.776 ± 0.0153 | 0.7746 ± 0.0296 | 0.792 ± 0.078 | 0.8227 ± 0.0167 | 0.8053 ± 0.0338 | 0.7747 ± 0.0506 | 0.7733 ± 0.0323 | 0.78 ± 0.0145 |
| LYM | 0.15 | 0.7387 ± 0.0233 | 0.7814 ± 0.0578 | 0.8027 ± 0.0204 | 0.7587 ± 0.026 | 0.784 ± 0.0238 | 0.7987 ± 0.0247 | 0.7773 ± 0.0243 | 0.8067 ± 0.0302 | 0.8267 ± 0.0352 |
| LYM | 0.2 | 0.7547 ± 0.0357 | 0.7267 ± 0.0133 | 0.72 ± 0.046 | 0.7947 ± 0.0328 | 0.7933 ± 0.0236 | 0.7533 ± 0.0383 | 0.728 ± 0.0331 | 0.7667 ± 0.038 | 0.7587 ± 0.0159 |
| LYM | 0.25 | 0.7213 ± 0.0443 | 0.74 ± 0.0531 | 0.6987 ± 0.0369 | 0.7587 ± 0.0242 | 0.7773 ± 0.0555 | 0.7653 ± 0.0563 | 0.724 ± 0.0243 | 0.7413 ± 0.042 | 0.78 ± 0.0293 |
| LYM | 0.3 | 0.6667 ± 0.0403 | 0.7387 ± 0.0441 | 0.7093 ± 0.0454 | 0.7347 ± 0.0128 | 0.736 ± 0.0423 | 0.7773 ± 0.0586 | 0.7294 ± 0.0473 | 0.7347 ± 0.0272 | 0.78 ± 0.0445 |
| BC | 0.1 | 0.7179 ± 0.0396 | 0.7062 ± 0.0321 | 0.6848 ± 0.0202 | 0.7014 ± 0.0267 | 0.7069 ± 0.0261 | 0.7028 ± 0.0209 | 0.7124 ± 0.0273 | 0.7145 ± 0.0222 | 0.7483 ± 0.0317 |
| BC | 0.15 | 0.7241 ± 0.0176 | 0.7131 ± 0.0324 | 0.72 ± 0.0215 | 0.7028 ± 0.0207 | 0.6917 ± 0.0318 | 0.7083 ± 0.0209 | 0.6993 ± 0.0247 | 0.689 ± 0.0311 | 0.7345 ± 0.0252 |
| BC | 0.2 | 0.6966 ± 0.0177 | 0.7034 ± 0.0205 | 0.7055 ± 0.0254 | 0.6993 ± 0.0136 | 0.6924 ± 0.0247 | 0.7007 ± 0.0257 | 0.6993 ± 0.0367 | 0.6856 ± 0.0079 | 0.7241 ± 0.0343 |
| BC | 0.25 | 0.6938 ± 0.0268 | 0.7152 ± 0.0259 | 0.6917 ± 0.0072 | 0.7021 ± 0.0436 | 0.7124 ± 0.0149 | 0.7083 ± 0.027 | 0.7103 ± 0.0144 | 0.7255 ± 0.0246 | 0.7069 ± 0.0155 |
| BC | 0.3 | 0.7 ± 0.0311 | 0.7076 ± 0.0206 | 0.6986 ± 0.0219 | 0.7028 ± 0.0263 | 0.7055 ± 0.0262 | 0.7083 ± 0.0328 | 0.7324 ± 0.0135 | 0.6862 ± 0.0195 | 0.7172 ± 0.0214 |
| B | 0.1 | 0.764 ± 0.0107 | 0.7635 ± 0.0116 | 0.7552 ± 0.0128 | 0.7675 ± 0.007 | 0.7691 ± 0.0121 | 0.7627 ± 0.0116 | 0.7576 ± 0.0099 | 0.7616 ± 0.0143 | 0.7773 ± 0.0116 |
| B | 0.15 | 0.7618 ± 0.0214 | 0.7504 ± 0.0136 | 0.7523 ± 0.0092 | 0.7613 ± 0.0069 | 0.7579 ± 0.0058 | 0.7573 ± 0.012 | 0.7629 ± 0.0149 | 0.7549 ± 0.0093 | 0.784 ± 0.0142 |
| B | 0.2 | 0.756 ± 0.0178 | 0.7768 ± 0.0201 | 0.7453 ± 0.0174 | 0.7691 ± 0.0141 | 0.764 ± 0.0172 | 0.7597 ± 0.0134 | 0.7656 ± 0.0138 | 0.7602 ± 0.0177 | 0.7653 ± 0.0119 |
| B | 0.25 | 0.7637 ± 0.0061 | 0.7683 ± 0.0224 | 0.7613 ± 0.0154 | 0.7544 ± 0.0105 | 0.7651 ± 0.0125 | 0.7547 ± 0.016 | 0.7643 ± 0.015 | 0.7675 ± 0.0133 | 0.7707 ± 0.0137 |
| B | 0.3 | 0.756 ± 0.012 | 0.7731 ± 0.0108 | 0.7547 ± 0.0106 | 0.7683 ± 0.0149 | 0.7752 ± 0.012 | 0.7637 ± 0.0109 | 0.7576 ± 0.0167 | 0.7629 ± 0.0168 | 0.7787 ± 0.0133 |
| TTT | 0.1 | 0.7281 ± 0.0056 | 0.7125 ± 0.0192 | 0.7167 ± 0.0134 | 0.7369 ± 0.0107 | 0.7304 ± 0.0179 | 0.6971 ± 0.0134 | 0.7288 ± 0.0198 | 0.7169 ± 0.0146 | 0.73235 ± 0.0144 |
| TTT | 0.15 | 0.716 ± 0.0099 | 0.7021 ± 0.0233 | 0.7071 ± 0.0079 | 0.7156 ± 0.0099 | 0.6894 ± 0.011 | 0.6908 ± 0.0089 | 0.7127 ± 0.0072 | 0.715 ± 0.0102 | 0.7344 ± 0.0293 |
| TTT | 0.2 | 0.699 ± 0.0193 | 0.7138 ± 0.0156 | 0.7065 ± 0.013 | 0.7152 ± 0.0208 | 0.6827 ± 0.0111 | 0.6933 ± 0.0155 | 0.6946 ± 0.0069 | 0.6965 ± 0.0164 | 0.7312 ± 0.0229 |
| TTT | 0.25 | 0.7075 ± 0.0178 | 0.7056 ± 0.0124 | 0.7 ± 0.0078 | 0.7108 ± 0.0141 | 0.7017 ± 0.0213 | 0.6933 ± 0.0191 | 0.6981 ± 0.0183 | 0.719 ± 0.0172 | 0.7115 ± 0.014 |
| TTT | 0.3 | 0.7083 ± 0.0186 | 0.7177 ± 0.02 | 0.6933 ± 0.0146 | 0.6975 ± 0.0084 | 0.6892 ± 0.0192 | 0.7021 ± 0.0119 | 0.6973 ± 0.0181 | 0.7029 ± 0.017 | 0.7115 ± 0.0132 |
| CMC | 0.1 | 0.4384 ± 0.0194 | 0.4382 ± 0.0119 | 0.4464 ± 0.0126 | 0.4478 ± 0.0098 | 0.4496 ± 0.016 | 0.4488 ± 0.0231 | 0.4355 ± 0.0137 | 0.4389 ± 0.0191 | 0.4507 ± 0.004 |
| CMC | 0.15 | 0.4328 ± 0.0073 | 0.4428 ± 0.0153 | 0.4331 ± 0.0027 | 0.4295 ± 0.0206 | 0.431 ± 0.0121 | 0.4343 ± 0.012 | 0.4305 ± 0.0077 | 0.4285 ± 0.0136 | 0.4574 ± 0.0135 |
| CMC | 0.2 | 0.4311 ± 0.0071 | 0.4321 ± 0.012 | 0.4305 ± 0.0142 | 0.4324 ± 0.008 | 0.4301 ± 0.0097 | 0.4228 ± 0.0153 | 0.4212 ± 0.0178 | 0.4282 ± 0.0115 | 0.4405 ± 0.0155 |
| CMC | 0.25 | 0.4205 ± 0.018 | 0.4216 ± 0.0233 | 0.4211 ± 0.0074 | 0.435 ± 0.0155 | 0.4268 ± 0.0121 | 0.4343 ± 0.0159 | 0.428 ± 0.0084 | 0.4316 ± 0.0085 | 0.425 ± 0.0056 |
| CMC | 0.3 | 0.4304 ± 0.0201 | 0.4293 ± 0.0084 | 0.4242 ± 0.0104 | 0.4327 ± 0.0235 | 0.4292 ± 0.0178 | 0.4209 ± 0.0132 | 0.4292 ± 0.0051 | 0.4315 ± 0.0097 | 0.4291 ± 0.0063 |
| BS | 0.1 | 0.8381 ± 0.0213 | 0.8311 ± 0.0118 | 0.8359 ± 0.0217 | 0.8162 ± 0.0201 | 0.7984 ± 0.0189 | 0.8412 ± 0.0172 | 0.8495 ± 0.0192 | 0.8257 ± 0.0066 | 0.8556 ± 0.0178 |
| BS | 0.15 | 0.8187 ± 0.0155 | 0.8076 ± 0.0182 | 0.799 ± 0.0238 | 0.7794 ± 0.019 | 0.7981 ± 0.0214 | 0.7984 ± 0.0194 | 0.8181 ± 0.0153 | 0.7832 ± 0.0127 | 0.8079 ± 0.0095 |
| BS | 0.2 | 0.7848 ± 0.0215 | 0.7594 ± 0.0149 | 0.7914 ± 0.0189 | 0.7632 ± 0.0146 | 0.7632 ± 0.0072 | 0.7616 ± 0.0184 | 0.7683 ± 0.0214 | 0.7682 ± 0.0151 | 0.7816 ± 0.012 |
| BS | 0.25 | 0.7607 ± 0.0107 | 0.7527 ± 0.0144 | 0.7489 ± 0.0308 | 0.7467 ± 0.0088 | 0.76 ± 0.0183 | 0.7429 ± 0.0101 | 0.7425 ± 0.0153 | 0.7473 ± 0.0125 | 0.7651 ± 0.0287 |
| BS | 0.3 | 0.727 ± 0.0128 | 0.7238 ± 0.0176 | 0.7229 ± 0.0257 | 0.6994 ± 0.0171 | 0.7143 ± 0.0135 | 0.7308 ± 0.0095 | 0.7105 ± 0.0203 | 0.7105 ± 0.0239 | 0.7333 ± 0.0092 |
Table 4. Average classification accuracy and standard deviation with SVM classifier on seven datasets in MCAR missing pattern.

| Method | CE | LYM | BC | B | TTT | CMC | BS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mode | 0.75852 ± 0.0105 | 0.76852 ± 0.03464 | 0.70428 ± 0.03254 | 0.75982 ± 0.01632 | 0.68676 ± 0.01174 | 0.44646 ± 0.01356 | 0.75436 ± 0.0142 |
| Random | 0.74708 ± 0.01364 | 0.75734 ± 0.027 | 0.69408 ± 0.0228 | 0.7606 ± 0.01658 | 0.69762 ± 0.01782 | 0.44012 ± 0.0096 | 0.76818 ± 0.0202 |
| AE | 0.754 ± 0.0095 | 0.74132 ± 0.03356 | 0.71076 ± 0.01978 | 0.76226 ± 0.01348 | 0.69058 ± 0.01888 | 0.44796 ± 0.0138 | 0.77404 ± 0.0208 |
| KNN | 0.7389 ± 0.01006 | 0.77866 ± 0.033 | 0.71448 ± 0.02618 | 0.76748 ± 0.01314 | 0.6844 ± 0.01346 | 0.4506 ± 0.01284 | 0.74948 ± 0.0135 |
| RF | 0.74284 ± 0.00766 | 0.7776 ± 0.04184 | 0.69862 ± 0.02204 | 0.76182 ± 0.01234 | 0.68316 ± 0.0135 | 0.43884 ± 0.01288 | 0.7626 ± 0.01586 |
| DT | 0.74338 ± 0.0091 | 0.7736 ± 0.02096 | 0.69932 ± 0.02926 | 0.7632 ± 0.0148 | 0.67418 ± 0.01712 | 0.44734 ± 0.01258 | 0.76342 ± 0.0147 |
| MLP | 0.7471 ± 0.01264 | 0.76612 ± 0.0392 | 0.70124 ± 0.02072 | 0.76326 ± 0.01022 | 0.69496 ± 0.01782 | 0.44776 ± 0.01246 | 0.77308 ± 0.0116 |
| Hot-deck | 0.7405 ± 0.0091 | 0.76 ± 0.03058 | 0.70068 ± 0.02982 | 0.75942 ± 0.01426 | 0.68732 ± 0.01182 | 0.44484 ± 0.01014 | 0.75808 ± 0.0166 |
| IMLP | 0.76368 ± 0.01534 | 0.77252 ± 0.03774 | 0.73378 ± 0.03006 | 0.7701 ± 0.01094 | 0.70292 ± 0.01976 | 0.45986 ± 0.01364 | 0.77484 ± 0.0145 |
Table 5. Average classification accuracy and standard deviation with SVM classifier on seven datasets in NMAR missing pattern.

| Method | CE | LYM | BC | B | TTT | CMC | BS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mode | 0.73736 ± 0.00922 | 0.73998 ± 0.03076 | 0.7044 ± 0.02292 | 0.76386 ± 0.01538 | 0.70026 ± 0.01642 | 0.44612 ± 0.01038 | 0.7626 ± 0.01542 |
| Random | 0.74374 ± 0.01044 | 0.74614 ± 0.03696 | 0.70606 ± 0.02442 | 0.76618 ± 0.01238 | 0.68308 ± 0.01988 | 0.44062 ± 0.01456 | 0.74516 ± 0.0152 |
| AE | 0.75258 ± 0.01218 | 0.76586 ± 0.0374 | 0.69972 ± 0.02182 | 0.75872 ± 0.01156 | 0.68342 ± 0.017 | 0.43874 ± 0.01234 | 0.7328 ± 0.02184 |
| KNN | 0.73324 ± 0.00816 | 0.79254 ± 0.03296 | 0.70554 ± 0.0279 | 0.75942 ± 0.01326 | 0.67988 ± 0.01466 | 0.4434 ± 0.01134 | 0.7356 ± 0.01344 |
| RF | 0.74816 ± 0.00878 | 0.77278 ± 0.0389 | 0.7 ± 0.02498 | 0.76134 ± 0.01668 | 0.6856 ± 0.01566 | 0.43828 ± 0.01296 | 0.7381 ± 0.01398 |
| DT | 0.74102 ± 0.01464 | 0.7792 ± 0.03738 | 0.7091 ± 0.02434 | 0.76796 ± 0.01314 | 0.67914 ± 0.0176 | 0.43882 ± 0.01094 | 0.73034 ± 0.015 |
| MLP | 0.75176 ± 0.0114 | 0.77494 ± 0.03936 | 0.69656 ± 0.02284 | 0.7669 ± 0.01362 | 0.68902 ± 0.01428 | 0.44292 ± 0.01292 | 0.7413 ± 0.01892 |
| Hot-deck | 0.73094 ± 0.01246 | 0.7864 ± 0.02186 | 0.7007 ± 0.02144 | 0.76584 ± 0.01586 | 0.6798 ± 0.01152 | 0.4472 ± 0.01442 | 0.7215 ± 0.01444 |
| IMLP | 0.75594 ± 0.00996 | 0.79868 ± 0.03614 | 0.70936 ± 0.02064 | 0.76756 ± 0.01534 | 0.70302 ± 0.02056 | 0.44898 ± 0.0127 | 0.7599 ± 0.01774 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
