Article

Cross-Domain Active Learning for Electronic Nose Drift Compensation

1
WESTA College, Southwest University, Chongqing 400715, China
2
College of Artificial Intelligence, Southwest University, Chongqing 400715, China
3
Brain-Inspired Computing and Intelligent Control of Chongqing Key Laboratory, Southwest University, Chongqing 400715, China
*
Author to whom correspondence should be addressed.
Micromachines 2022, 13(8), 1260; https://doi.org/10.3390/mi13081260
Submission received: 15 June 2022 / Revised: 25 July 2022 / Accepted: 3 August 2022 / Published: 5 August 2022
(This article belongs to the Special Issue Electronic Noses: Principles and Applications)

Abstract

Drift in the electronic nose (E-nose) is an important cause of data distortion. Existing active learning methods do not account for the misalignment of data feature distributions between domains caused by drift when selecting samples. We therefore propose a cross-domain active learning (CDAL) method based on the Hellinger distance (HD) and the maximum mean discrepancy (MMD). In this framework, a weighted combination of the HD and the MMD serves as the sample selection criterion, so that as few labeled samples as possible reflect as much drift information as possible. Overall, the CDAL framework has the following advantages: (1) CDAL combines active learning and domain adaptation to better assess both the interdomain distribution differences and the amount of information contained in the selected samples. (2) A Gaussian kernel mapping aligns the data distributions between domains as closely as possible. (3) The combination of active learning and domain adaptation significantly suppresses the effects of time drift caused by sensor aging, improving the detection accuracy of the E-nose system for data collected at different times. The results show that the proposed CDAL method achieves better drift compensation than several recent methodological frameworks.

1. Introduction

Electronic noses (E-noses) are intelligent sensor systems in the field of artificial olfaction that mimic the olfactory system of mammals to identify or measure gas samples. A complete E-nose system typically consists of three components: a gas sensor array, a data pre-processing unit, and a pattern recognition algorithm module. An E-nose is sensitive to complex gas mixtures/compounds and has analytical capabilities, allowing it to accurately identify complex gas samples. With the development of artificial olfaction, E-noses are becoming increasingly important in many fields [1], including environmental monitoring [2], biomedical detection [3], and medical diagnostics [4]. However, drift caused by sensor aging remains unavoidable, owing to sensitivity changes that arise during the manufacture and use of gas sensors [5,6]. Although the drift of E-nose sensors grows progressively larger over time, it is nonlinear, irregular, and unpredictable and cannot be measured directly as a function of time. In addition, sensor aging affects different types of gases differently, so the sensors' sensitivity changes by different amounts for different gases, and consequently the E-nose data exhibit no consistent drift trend. As described above, sensor drift in the E-nose system is very complex and difficult to measure directly. It can significantly reduce the recognition accuracy of gas sensors and cause many problems in practical applications. Therefore, a suitable and effective drift compensation method for E-noses is required to address this common problem in the field of artificial olfaction.
Drift leads to an inconsistent distribution between source and target domain samples, so a model trained on source domain samples will not achieve good classification performance on target domain samples. Modeling by continuously acquiring a large number of calibration samples with known labels at different times is a time-consuming and laborious task. In fact, a small number of labeled target domain samples contains knowledge of the target domain sample distribution. Joint modeling with a large number of labeled source domain samples and a small number of labeled target domain samples can discern the differences in the distributions of target and source domain data caused by sensor drift, greatly improving the model's classification accuracy on target domain samples. Therefore, acquiring some labeled target domain samples at a small additional cost, and thereby greatly improving the prediction accuracy on a large amount of unlabeled target domain data, is an acceptable low-cost approach to addressing E-nose drift.
In this study, we aimed to mitigate the distorted data distribution and degraded E-nose performance caused by sensor drift. We combined an active learning (AL) approach with domain adaptation to complementarily enhance the sensor's performance in identifying drifting gases. Specifically, we designed a semi-supervised cross-domain active learning (CDAL) model for drift compensation of the E-nose. As shown in Figure 1, the CDAL method uses the AL paradigm based on the query by committee (QBC) model to calculate the Hellinger distance (HD) between the committee members' predicted output distributions for each target domain sample, which measures how difficult the selected sample is to classify. Additionally, we measure the interdomain distribution difference of the data by calculating the maximum mean discrepancy (MMD) of each target domain sample with respect to the source domain center. Finally, we use a weighted combination of the HD and MMD as the final sample selection criterion, by which drift compensation samples are selected to update the classification model. The new CDAL method considers both the information content of the selected samples and the interdomain distribution differences, and improves the drift compensation performance of the sensor through a combination of domain adaptation and active learning. The method is able to reflect the maximum amount of drift information with a minimum number of labeled samples. To the best of our knowledge, the combination of active learning and domain adaptation has not previously been used for drift compensation of E-nose sensors.
The advantages of CDAL over other methods can be summarized as follows:
(1)
The framework combines active learning and domain adaptation, making full use of the differences in sample distributions between domains while considering the degree of disagreement among committee members on the classification of each sample.
(2)
The HD is used as a measure of the divergence of the query committee's outputs and is more appropriate for sample selection than the usual discrete judgement criteria.
(3)
The MMD is able to measure the distributional variability between the source and target domains, greatly facilitating the selection of more representative samples for modification of the classification model in active learning.
(4)
The Gaussian kernel mapping allows us to represent the distance between distributions by the inner products of mapped points, which can be used to assess the impact of drift-induced differences in the data distribution.
(5)
Experiments show that CDAL is significantly effective in both drift suppression and pattern recognition.
The paper is divided into the following sections: Section 2 reviews related work on active learning. Section 3 introduces the method proposed in this paper, including formula derivation and the optimization process. Section 4 describes the experimental setup, the experimental results, and the sensitivity analysis of the parameters. Finally, Section 5 contains the conclusion of the full text.

2. Related Work

2.1. Review of Drift Compensation of E-Nose

E-nose drift, generated by the external environment, the aging of the sensor itself, poisoning, and other factors, produces irregular interference with the response of the gas sensor, thus reducing the recognition accuracy of the E-nose system. In recent years, much work has been done to solve the drift problem of E-nose sensors. These methods fall roughly into three main types: component correction strategies, domain correction strategies, and classifier strategies.
The component correction strategies aim to identify drift components in the original signal and then remove them before training the final discriminative classifier. This approach is primarily based on feature or component removal from sample data. Haugen et al. [7] proposed a mathematical drift compensation algorithm based on corrected samples that maintains the true characteristics of the data and removes sensor drift from the measurement sequence. Regarding drift calibration, most of the current studies in the literature use direct standardization (DS) methods to map the signals from the slave device to the space of the master device, converting the slave data to match the master data [8,9]. Kermit and Tomic [10] proposed a correction model called Independent Component Analysis (ICA), which used higher-order statistical methods to analyze gas sensor system data, eliminating the components with drift characteristics. Among others, the component correction method using principal component analysis (CCPCA) attempts to reconstruct the sensor response of a pure gas without drift effects [11]. Ziyatdinov et al. [12] proposed a general PCA method that calculated the drift directions of all classes. In addition to component correction methods, Padilla et al. [13] used orthogonal signal correction (OSC) to find drift elements orthogonal to the label space. Because sensor drift in the E-nose system is nonlinear, irregular, and unpredictable owing to inherent uncontrollability in sensor manufacturing and use, the exact direction of drift change cannot be found, or there may be no fixed drift direction at all; thus, the performance of component correction methods is not satisfactory.
A domain correction strategy learns a model from the source domain data distribution that also performs well on different (but related) target domain data distributions, making it well suited to drift compensation for the E-nose. Since E-nose drift is nonlinear and unstable, Tao et al. [14] proposed a kernel transformation method to perform domain correction, which improves the consistency of the distributions of the source and target domains. Zhang et al. [5] proposed a domain regularized component analysis (DRCA) method that projects the source domain data and the drifted target domain data, compensating for the drift by making the distributions of the two projected subspaces similar. Zhang et al. also proposed a cross-domain discriminative subspace learning (CDSL) method that achieves drift compensation while ensuring data integrity [15]. Tian et al. [16] proposed the local manifold embedding cross-domain subspace learning (LME-CDSL) algorithm, a unified subspace learning model that combines manifold learning and domain adaptation. Wang et al. [17] proposed an extreme learning machine (ELM) with discriminative domain reconstruction, which improves the classification efficiency of the E-nose by discriminating the data of each domain and learning a domain-invariant space to minimize the distribution differences between domains. Yi et al. proposed a unified two-layer drift compensation framework that addresses the sensor drift problem by aligning the distributions of different domains at the feature and decision layers [18]. Yan et al. proposed subspace alignment based on an ELM (SAELM) for E-nose drift compensation, which achieves domain alignment by constructing a uniform feature representation space under multiple criteria [19]. Combining convolutional neural networks, Zhang et al. proposed a target-domain-free domain adaptation convolutional neural network (TDACNN), which integrates embedding features at different levels, using intermediate features between the two domains for drift compensation [20]. However, domain correction methods usually require finding a common domain-invariant subspace. This is an extremely complex design task that requires researchers to develop various indexes for domain alignment, which is not easy to implement.
The aim of classifier strategies is to design robust classifiers that achieve better discriminative output for E-nose drift compensation. Classifier methods are in turn divided into single-classifier methods and classifier ensemble methods. The single-classifier approach builds a drift compensation model around one classifier as the discriminant model. Zhang et al. [21] proposed a method based on a domain adaptation extreme learning machine (DAELM), in which a set of labeled target domain samples, obtained in a manner similar to active learning, is used as a reference. Ma et al. [22] proposed a weighted domain transfer extreme learning machine, which uses clustering of samples as a criterion to select suitable labeled samples and calculates a sensitivity matrix by weighting them, achieving drift compensation with fewer labeled samples. Tian et al. [23] proposed a Gaussian deep belief classification network (GDBCN) for E-nose drift compensation, which compensates for sensor drift at the decision level by cascading a DBN-SoftMax classifier layer based on Gaussian-Bernoulli restricted Boltzmann machines. The ensemble approach combines the advantages of multiple single classifiers so that the final ensemble outperforms any single component classifier and can greatly improve recognition accuracy. Vergara et al. [24] innovatively proposed a weighted ensemble of support vector machine (SVM) base classifiers that solves the gas sensor drift problem over long periods of time while achieving high accuracy. Magna et al. [25] proposed an adaptive classifier ensemble method that improves the prediction performance of faulty and drifting sensors through majority-voting decisions. Liu et al. [26] proposed a fitting-based dynamic classifier approach for metal oxide gas sensor ensembles, using a dynamically weighted combination of SVM classifiers trained on datasets collected over different time periods to obtain better classification results. A regularized ensemble of classifiers was proposed by Verma et al. [27], applying regularization to the weighted ensemble of classifiers used for drift compensation. Zhao et al. [28] proposed an ensemble model of multiple classifiers based on an improved LSTM and SVM, which considerably improved classifier performance. Rehman et al. [29] proposed a multi-classifier tree model with transient features, where each node uses a different classifier group for ensemble classification. Compared with the feature level, it is much more difficult to achieve cross-domain adaptation at the decision level, because designing a robust classifier is itself an arduous task that often requires labeled target domain samples to implement a domain adaptation classifier for drift compensation. Moreover, if the labeled target domain samples are selected blindly, correct knowledge of the target domain distribution cannot be obtained, causing the design of the cross-domain classifier to fail.

2.2. Review of Active Learning

The AL process is a closed loop consisting of two sample data sets, L and U, as shown in Figure 2. U denotes the unlabeled target domain data (the sample query set) from which drift compensation samples are selected, while L denotes the drift compensation set (labeled drift compensation samples) used to update the "machine learning model" C. The "selected samples" L are drawn from the "query set" U, which is full of drifting target domain instances, and their "labels" are obtained by querying the "expert" S. The most critical element is the "instance selection strategy" Q, which requires a suitable rule to select the most representative instances from the data pool for retraining the "machine learning model" C. Finally, the "machine learning model" C is updated with the selected instances and their labels for the next recognition. The AL framework is thus a distinct closed-loop structure that updates the "machine learning model" C with the "selected instances" L and their "labels", thereby improving the model's recognition accuracy.
In recent years, active learning methods with limited labeled samples have also begun to be applied to the drift compensation of E-nose sensors. The core of the AL approach is its "instance selection strategy" Q, which aims to reflect the maximum information with the minimum number of samples. Liu et al. proposed an active learning method based on dynamic clustering to balance the labels of different categories of labeled samples [30]. Liu et al. also proposed a hybrid kernel-based adaptive active learning approach, designing a hybrid sample evaluation kernel to comprehensively evaluate the labeled samples [31]. More recently, Li et al. [32] proposed a method combining classifier ensembles and active learning to reduce the cost of model training by reducing the number of labeled samples and to better suppress the drift of gas sensors. Considering the class-imbalance problem of sample selection, a new metric, "classifier state", and an associated sample evaluation procedure were proposed for E-nose drift compensation, successfully reducing the negative impact of the class-imbalance problem through a classifier-state sampling strategy [33]. Considering the problem of noisy labels in active learning, Cao et al. proposed a label evaluation method based on the active learning framework to evaluate and correct noisy labels and improve labeling efficiency [34].
However, AL methods are still in their infancy, and most of the AL frameworks proposed only consider the amount of information contained in the samples while ignoring the effect of misaligned sample data distribution between domains due to drift. In this regard, we proposed the CDAL framework based on the HD and MMD for E-nose drift compensation.

3. Methodology

3.1. Notations

Suppose $X_S = [x_{S1}, \ldots, x_{SN_S}] \in \mathbb{R}^{D \times N_S}$ denotes the source domain dataset, where $D$ is the number of features and $N_S$ the number of source domain samples, and $C_S = [c_{S1}, \ldots, c_{SN_S}]$ denotes the label set of the source domain. Similarly, suppose $X_T = [x_{T1}, \ldots, x_{TN_T}] \in \mathbb{R}^{D \times N_T}$ denotes the target domain dataset, where $N_T$ is the number of target domain samples, and $C_T = [c_{T1}, \ldots, c_{TN_T}]$ denotes the label set of the target domain. $\|\cdot\|$ denotes the $L_2$-norm.

3.2. Cross-Domain Active Learning Approach

Unlike the traditional pool-based AL method, the CDAL method requires calculating, from two separate perspectives, the information value of each sample and its distribution difference from the source domain.
First, we calculate the HD to measure the divergence of the selected samples. For two discrete probability distributions $P = \{p_i,\, i = 1, \ldots, n\}$ and $Q = \{q_i,\, i = 1, \ldots, n\}$, the HD between them is defined as Equation (1).
$$H(p,q)=\sqrt{\frac{1}{2}\sum_{i=1}^{n}\left(\sqrt{p_i}-\sqrt{q_i}\right)^{2}},$$
For computational purposes, we can also view this as the Euclidean distance between the two vectors of square roots of the probability distributions, as Equation (2).
$$H(p,q)=\frac{1}{\sqrt{2}}\left\|\sqrt{P}-\sqrt{Q}\right\|_{2},$$
By definition, the HD is a metric satisfying the triangle inequality. The factor $\sqrt{2}$ in the definition ensures that $H(p,q) \in [0,1]$ for all probability distributions. Since we use the QBC method in our framework, we calculate the sum of the HDs between each pair of committee members as the overall disagreement. In this regard, we obtain an $H(x_{Tj})$ for each target domain sample $x_{Tj}$ as Equation (3).
$$H(x_{Tj})=\frac{1}{\sqrt{2}}\sum_{j=1}^{K}\sum_{h=j+1}^{K}\left\|\sqrt{P_j}-\sqrt{Q_h}\right\|_{2},$$
where $K$ denotes the number of committee members, and $P_j$ and $Q_h$ denote the probability distributions predicted by the $j$-th and $h$-th committee members for target domain sample $x_{Tj}$. As shown in Equation (4), the total number of pairs of probability distributions is determined by $K$.
$$N_K=\frac{K(K-1)}{2},$$
where $N_K$ denotes the total number of pairs of probability distributions. To highlight the variability of the samples during later selection, we use the sum of the HDs over the committee pairs directly in the calculation, so that $H(x_{Tj}) \in [0, N_K]$.
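As a concrete illustration, the committee disagreement of Equations (1)-(4) can be sketched in a few lines of Python (the function names are ours, not from the paper):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (Equation (2))."""
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

def committee_disagreement(probs):
    """Sum of pairwise HDs over K committee members (Equation (3)).

    probs: array of shape (K, n_classes), one predicted distribution per
    member for a single target domain sample. The result lies in
    [0, K*(K-1)/2] (Equation (4)); larger values mean stronger disagreement.
    """
    K = len(probs)
    return sum(hellinger(probs[a], probs[b])
               for a in range(K) for b in range(a + 1, K))

# Three committee members predicting over three classes: two confident but
# conflicting members and one uncertain member give a high disagreement.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.4, 0.4, 0.2]])
disagreement = committee_disagreement(probs)
```

A sample on which all members agree yields a disagreement of zero, so samples near the decision boundary naturally receive higher scores.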
We used HD to measure the information value of the selected samples, and next we introduce the MMD to solve the problem of interdomain data distribution differences when selecting samples.
Since we need to calculate the MMD between each target domain sample and the center of the source domain, the MMD in this paper is defined as Equation (5).
$$\mathrm{MMD}(X_S,x_{Tj})^{2}=\left\|\frac{1}{N_S}\sum_{i=1}^{N_S}\varphi(x_{Si})-\varphi(x_{Tj})\right\|_{2}^{2},$$
where $\varphi(\cdot)$ denotes the mapping function.
The key to calculating the MMD is to find a suitable mapping function $\varphi(\cdot)$ that maps the sample space to a higher-dimensional feature space. To this end, we first expand Equation (5) to Equation (6).
$$\mathrm{MMD}(X_S,x_{Tj})^{2}=\frac{1}{N_S^{2}}\sum_{i=1}^{N_S}\sum_{i'=1}^{N_S}\varphi(x_{Si})\varphi(x_{Si'})^{T}-\frac{2}{N_S}\sum_{i=1}^{N_S}\varphi(x_{Si})\varphi(x_{Tj})^{T}+\varphi(x_{Tj})\varphi(x_{Tj})^{T},$$
where $\varphi(\cdot)^{T}$ represents the transpose of the mapped vector.
According to kernel function theory, the inner product $\varphi(x)^{T}\varphi(y)$ of two vectors in a high-dimensional feature space can be computed from a kernel function in the original space without knowing the explicit mapping $\varphi(\cdot)$. Therefore, using the kernel function $k(x,y)$, Equation (6) can be transformed into Equation (7).
$$\mathrm{MMD}(X_S,x_{Tj})=\left[\frac{1}{N_S^{2}}\sum_{i=1}^{N_S}\sum_{i'=1}^{N_S}k(x_{Si},x_{Si'})-\frac{2}{N_S}\sum_{i=1}^{N_S}k(x_{Si},x_{Tj})+k(x_{Tj},x_{Tj})\right]^{\frac{1}{2}},$$
where $k(x,y)$ denotes the kernel function, which in this paper refers specifically to the Gaussian kernel function.
Considering the mapping space dimensionality of the data distribution, we use the Gaussian kernel as the kernel function for the MMD calculation, as shown in Equation (8):
$$k(x,y)=\exp\!\left(-\frac{\|x-y\|^{2}}{2\delta^{2}}\right),$$
where $\delta$ is an adjustable parameter of the Gaussian kernel function.
A larger MMD between a target domain sample and the center of the source domain indicates a larger difference between the sample distributions of the two domains. In other words, the larger the MMD, the harder the sample is to distinguish. Therefore, we calculate the MMD between each target domain sample and the source domain and select the instances with the largest MMD for labeling.
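A minimal sketch of this per-sample MMD (Equations (7) and (8)), assuming NumPy and using a naive double loop for clarity (the function names are illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, delta):
    """Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * delta^2)) (Equation (8))."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * delta ** 2))

def mmd_to_source(X_S, x_t, delta):
    """MMD between one target sample and the source domain center (Equation (7))."""
    N = len(X_S)
    k_ss = np.mean([gaussian_kernel(X_S[i], X_S[j], delta)
                    for i in range(N) for j in range(N)])
    k_st = np.mean([gaussian_kernel(X_S[i], x_t, delta) for i in range(N)])
    k_tt = gaussian_kernel(x_t, x_t, delta)  # equals 1 for the Gaussian kernel
    # Clamp tiny negative values caused by floating-point error before the root.
    return np.sqrt(max(k_ss - 2.0 * k_st + k_tt, 0.0))
```

A target sample far from the source cloud yields a larger MMD than one near it, which is exactly the property the selection criterion exploits.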
The MMD represents the difference in data distribution between a sample and the source domain, while the HD represents how difficult the sample is to distinguish. A larger HD means the sample is harder to classify correctly, i.e., it drifts more and better reflects the target domain information. A larger MMD indicates a greater degree of migration between the target domain data and the center of the source domain. Therefore, we consider the effects of HD and MMD on sample selection jointly, using an adjustable parameter $\omega_1$ to balance the two. The weighted sum $Score_j$ of the two terms, shown as Equation (9), is used as the final criterion for sample selection.
$$Score_j=\omega_1\,\mathrm{MMD}(X_S,x_{Tj})+(1-\omega_1)\,H(x_{Tj}).$$
Since we need to maximize both HD and MMD, we take the samples with the largest $Score_j$ as the drift compensation set. Specifically, following the pool-based approach, we pick the target domain samples with the $N$ largest weighted sums for labeling, add them to the training set, and remove them from the target domain. This cross-domain active learning mode selects the most representative samples and greatly improves the recognition accuracy of the classifier.
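The weighted criterion of Equation (9) and the top-N pick can be sketched as follows (illustrative names; the per-sample HD and MMD values are assumed precomputed):

```python
import numpy as np

def select_drift_samples(hd, mmd, omega1, N):
    """Return indices of the N target samples with the largest Score_j (Equation (9)).

    hd, mmd: per-sample Hellinger and MMD values; omega1 in (0, 1).
    """
    scores = omega1 * np.asarray(mmd) + (1.0 - omega1) * np.asarray(hd)
    return np.argsort(scores)[::-1][:N]
```

The returned indices identify the samples sent to the expert for labeling.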
For a more intuitive representation of our model, we summarized and reorganized CDAL in Algorithm 1.
Algorithm 1 Proposed CDAL Algorithm
Input: sample query set U, training set $\{X_S, C_S\}$, test set $\{X_T, C_T\}$.
n: number of samples in the query set U.
N: number of selected samples.
Output: updated training set $\{X_S^{new}, C_S^{new}\}$, updated test set $\{X_T^{new}, C_T^{new}\}$.
1: Initialize $U = X_T$.
2: for each sample $x_{Tj}$ in U do
3: Calculate the HD $H(x_{Tj})$ from the probability distributions of the committee members' predictions through Equation (3).
4: Calculate the MMD between $x_{Tj}$ and the entire training set $X_S$ through Equation (5).
5: Calculate the weighted sum $Score_j$ of HD and MMD with the optimized weight parameter $\omega_1$ through Equation (9).
6: end for
7: Select the N samples with the largest $Score_j$ from U as the selected sample set $X_N$.
8: Label $X_N$ with categories $C_N$ by experts.
9: Update $X_S, C_S$: $X_S^{new} \leftarrow X_S \cup X_N$, $C_S^{new} \leftarrow C_S \cup C_N$.
10: Update $X_T, C_T$: $X_T^{new} \leftarrow X_T \setminus X_N$, $C_T^{new} \leftarrow C_T \setminus C_N$.
11: Return training set $\{X_S^{new}, C_S^{new}\}$, test set $\{X_T^{new}, C_T^{new}\}$.
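Steps 7-10 of Algorithm 1 (moving the selected samples and their expert labels from the test pool into the training set) might look like the following sketch, with hypothetical variable names and scores assumed to come from Equation (9):

```python
import numpy as np

def update_sets(X_S, C_S, X_T, C_T_expert, scores, N):
    """Move the top-N scoring target samples (with expert labels) into the training set."""
    sel = np.argsort(scores)[::-1][:N]                    # step 7: largest Score_j
    keep = np.setdiff1d(np.arange(len(X_T)), sel)         # remaining test pool
    X_S_new = np.vstack([X_S, X_T[sel]])                  # step 9: X_S union X_N
    C_S_new = np.concatenate([C_S, C_T_expert[sel]])      # step 8: expert labels C_N
    return X_S_new, C_S_new, X_T[keep], C_T_expert[keep]  # step 10: X_T minus X_N
```

In a real run, `C_T_expert[sel]` would be the labels supplied by the expert for the queried samples only.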

4. Experiments and Results

4.1. Dataset

We used a public benchmark gas sensor array time-drift dataset from UCSD [24] for the experimental tests; it is widely used by researchers in the E-nose field for drift compensation studies. This comprehensive sensor drift dataset was collected over a period of 36 months on a gas delivery platform. The data were recorded by an E-nose system containing an array of 16 MOS sensors (the commercial series TGS2600, TGS2602, TGS2610, and TGS2620), yielding a total of 13,910 samples from exposure to six gases at various concentrations: ammonia, acetaldehyde, acetone, ethylene, ethanol, and toluene. Eight features were extracted from each sensor, so each observation is a 128-dimensional (16 × 8) vector. The dataset was divided into 10 batches according to the month of collection. Detailed information about the dataset is shown in Table 1. Readers interested in the experimental details of the dataset can refer to Ref. [24].
To visualize the data drift, we performed principal component analysis (PCA) on the 10 batches of data, mapped the data into a two-dimensional principal component space, and plotted the corresponding scatter plots. The PCA for each batch is shown in Figure 3. It is clear that the scatter distributions of the batches vary considerably, owing to the irregular and time-varying nature of the sensor drift. Therefore, it is reasonable and necessary to compensate for drift from a pattern recognition perspective.
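The batch-wise projection can be reproduced with a short scikit-learn sketch (assuming scikit-learn is available; the plotting itself is omitted):

```python
import numpy as np
from sklearn.decomposition import PCA

def batch_pca_2d(X_batch):
    """Project one batch of 128-dimensional E-nose samples onto its first two PCs."""
    return PCA(n_components=2).fit_transform(X_batch)

# Each batch would then be drawn as one scatter plot, e.g. with matplotlib:
# plt.scatter(*batch_pca_2d(X_batch).T)
```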

4.2. Experimental Setup

To test the compensation effect of our method on sensor drift with the dataset, we took the following two experimental setups:
Setting 1: For the long-term drift of the sensor, we used batch 1 data as drift-free training data, and each remaining batch as drift data for testing.
Setting 2: For the short-term drift of the sensor, we took batch k (k = 1, 2, …, 9) as the training data without drift and batch k + 1 as the test data with drift.
To validate the effectiveness of our CDAL method, we used the QBC method as the active learning query method. For the classifiers, we used an SVM with good classification performance as the chair and SoftMax classifiers, which can output probabilities, as committee members to evaluate the information content of the samples. Considering recognition accuracy, we used the Gaussian kernel for the SVM classifier and performed a grid search over the two parameters $c$ and $\gamma$: the search range for $c$ was $[10^{0}, 10^{15}]$ and that for $\gamma$ was $[10^{-10}, 10^{5}]$, both on a logarithmic grid with a multiplicative step of 10. We initialized the number of selected samples $N$ to 30. We set the parameter $\delta$ in the Gaussian kernel mapping function to 8, calculated as $\delta = \sqrt{D/2}$ based on previous experience. Additionally, we optimized the weight parameter $\omega_1$ in the interval [0.01, 0.99] with a step size of 0.01.
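Assuming scikit-learn, the log-spaced grid search over $c$ and $\gamma$ described above can be sketched as follows (variable names are illustrative, and cross-validation is used as a stand-in for the paper's exact tuning protocol):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Log-spaced grids: c in [10^0, 10^15], gamma in [10^-10, 10^5], step x10.
param_grid = {
    "C": [10.0 ** e for e in range(0, 16)],
    "gamma": [10.0 ** e for e in range(-10, 6)],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
# search.fit(X_train, y_train) would then pick the best (c, gamma) pair.
```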

4.3. Experimental Results under Setting 1

In Setting 1, we compared our method with 12 recent drift compensation methods. First, two single classifiers, SVM and ELM, were used to demonstrate classifier performance without drift compensation. Note that we used SVM as the baseline classifier for the other drift compensation methods. We then compared four component correction models, DS [8,9], CCPCA [11], OSC [13], and a generalized least squares weighting (GLSW) [35] method; two typical transfer learning methods, DRCA [5] and CDSL [15]; and three sample selection methods, AL-KLD, AL-JSD, and AL-HD. The results are shown in Table 2, and the optimal parameters corresponding to the CDAL method are shown in Table 3.
To make the results more visible, we marked in bold the best results for each batch. Based on the data in Table 2, we can conclude that:
(1)
Our CDAL method has the best recognition results among all the compared methods under the same experimental conditions, and the average recognition accuracy is almost 10% higher than all the other methods.
(2)
Direct classification with SVM and ELM performed worst. The transfer learning methods CDSL and DRCA achieved recognition accuracies of 69.59% and 58.72%, respectively, slightly better than the baseline methods. This indicates that introducing domain adaptation to account for the interdomain distribution problem can improve the E-nose drift compensation effect.
(3)
The recognition accuracies of AL-KLD, AL-JSD, AL-HD, and CDAL, all of which use sample selection, were above 65%, significantly better than the other methods. This also shows that labeling a small amount of target domain data can greatly improve classification accuracy, which is worthwhile and effective.
We also optimized the SVM parameters c and γ and the weighted parameters ω 1 involved in Setting 1, and the results are shown in Table 3.

4.4. Experimental Results under Setting 2

For the short-term drift pattern of the E-nose (Setting 2), we used the same 13 methods as for Setting 1 as a comparison. From the results in Table 4, we can conclude that:
(1)
The CDAL method still achieves the best results in dealing with short-term drift: with 30 selected samples, its average recognition accuracy exceeds 82%, far surpassing the other methods. This indicates that our CDAL method offers better identification performance and robustness than the other methods and is very effective in dealing with the short-term drift of the E-nose.
(2)
AL-KLD, AL-HD, AL-JSD, and CDAL, as the methods using sample selection, achieved an average recognition accuracy of over 70%, which indicates that selecting an appropriate sample selection criterion for E-nose drift compensation can greatly improve recognition performance. This also shows that the method of obtaining a very small number of target domain labels is worthwhile and effective.
(3)
The recognition accuracy of CDSL, a transfer learning method, also reaches 75.58%, which indicates that the difference in data distribution caused by drift must be addressed in the drift compensation process of the E-nose system; this also provides a reference for our CDAL model.
For short-term drift, we again optimized the SVM parameters c and γ and the weighting parameter ω1; the optimized values are shown in Table 5.

4.5. Parameter Sensitivity Analysis

CDAL selects target domain samples for labeling according to a weighting rule based on MMD and HD, so the number of labeled samples N can affect the recognition results, and it is necessary to observe its effect on the performance of the CDAL method. To avoid overfitting on the one hand and too few labels on the other, we varied N over {5, 10, 20, 30, 40, 50}. Figure 4 and Table 6 show the effect of N on the long-term drift results, and Figure 5 and Table 7 show its effect on the short-term drift results.
Based on the data in Table 6 and the visualization in Figure 4, the long-term drift recognition accuracy increases with the number of labeled samples N. After N reaches 30, because some batches contain few samples, further increasing the number of labels may cause overfitting, so the gain in accuracy becomes insignificant. From Table 7 and Figure 5, we can similarly conclude that the short-term drift compensation also improves markedly as N increases and that, again owing to the small number of samples in some batches, the gain in accuracy is insignificant once N reaches 30.
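The sensitivity analysis above amounts to re-running sample selection and retraining at each labeling budget. A generic harness for such a sweep might look like the following sketch, where `select_fn` and `evaluate_fn` are placeholders for the CDAL sample selector and the classifier retraining step, which are not reproduced here.

```python
def sweep_label_budget(select_fn, evaluate_fn, budgets=(5, 10, 20, 30, 40, 50)):
    """For each labeling budget N, select N target-domain samples and
    record the recognition accuracy obtained after retraining."""
    return {n: evaluate_fn(select_fn(n)) for n in budgets}

# Toy demonstration: an accuracy curve that saturates as the budget
# grows, mimicking the qualitative trend in Tables 6 and 7.
picked = sweep_label_budget(
    select_fn=lambda n: list(range(n)),                  # pretend sample indices
    evaluate_fn=lambda idx: 60 + 30 * len(idx) / (len(idx) + 15),
)
```

The returned dictionary maps each budget to its accuracy, which is what Figures 4 and 5 plot.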

5. Conclusions

In this study, we used an approach based on active learning and domain adaptation to solve the sensor drift problem in the E-nose. We proposed a new cross-domain active learning framework based on HD and MMD, called CDAL. We assessed the amount of information contained in candidate samples using HD and measured their interdomain distribution differences using MMD; we then maximized the weighted sum of the two as the selection criterion for target domain samples and used a pool-based QBC method to obtain the most representative labeled samples. The proposed CDAL method inherits the advantage of active learning that classification accuracy can be greatly improved at a small labeling cost. Moreover, this is the first time active learning has been combined with domain adaptation for E-nose drift compensation, which was the focus of this study. Introducing the domain adaptation framework into active learning, so as to balance the impact of interdomain distribution differences on labeled samples, is an important contribution and provides new ideas for future work on active learning and domain adaptation.
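The selection criterion described above can be sketched as follows. This is a simplified illustration, not the paper's exact implementation: HD is computed here between the class-probability outputs of two hypothetical committee members (QBC disagreement), the distribution difference at each candidate is read off the Gaussian-kernel MMD witness function, and the two scores are combined with an assumed weight ω1 before the top-N candidates are returned.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between corresponding rows of two
    class-probability matrices (each row sums to 1)."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2, axis=-1))

def mmd_witness(Xs, Xt, sigma=1.0):
    """|witness| of the Gaussian-kernel MMD evaluated at each target
    sample; large values mark points where source and target differ."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return np.abs(k(Xt, Xs).mean(axis=1) - k(Xt, Xt).mean(axis=1))

def cdal_select(Xs, Xt, proba_a, proba_b, n, w1=0.7):
    """Pick the n target samples maximizing the weighted sum of
    committee disagreement (HD) and distribution shift (MMD witness)."""
    score = w1 * hellinger(proba_a, proba_b) + (1 - w1) * mmd_witness(Xs, Xt)
    return np.argsort(score)[::-1][:n]
```

With the returned indices labeled and added to the training set, the classifier is retrained, mirroring the pool-based QBC loop described above.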

Author Contributions

Conceptualization, F.S. and R.S.; methodology, F.S.; software, F.S. and R.S.; validation, F.S. and R.S.; formal analysis, F.S.; investigation, F.S.; resources, F.S.; data curation, J.Y.; writing—original draft preparation, F.S.; writing—review and editing, J.Y.; visualization, F.S.; supervision, J.Y.; project administration, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 62176220) and by Project S202210635121, supported by the Chongqing Municipal Training Program of Innovation and Entrepreneurship for Undergraduates.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available online at: http://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset (accessed on 15 June 2022).

Acknowledgments

The authors would like to thank A. Vergara, who provided the artificial olfactory data used in this paper, and T. Liu, who provided technical support.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Schematic diagram of the CDAL framework. The cross-domain active learning method using weighted HD and MMD selects samples to update the classification model, which enables more representative labeled samples.
Figure 2. Active Learning Framework.
Figure 3. Principal component analysis of ten batches of data. The drift of the data over time batches is evident from the graphs.
Figure 4. Effect of N on long-term drift.
Figure 5. Effect of N on short-term drift.
Table 1. Brief of the drift dataset.

| Batch ID | Month  | Acetone | Acetaldehyde | Ethanol | Ethylene | Ammonia | Toluene | Total |
|----------|--------|---------|--------------|---------|----------|---------|---------|-------|
| Batch 1  | 1–2    | 90      | 98           | 83      | 30       | 70      | 74      | 445   |
| Batch 2  | 3–10   | 164     | 334          | 100     | 109      | 532     | 5       | 1244  |
| Batch 3  | 11–13  | 365     | 490          | 216     | 240      | 275     | 0       | 1586  |
| Batch 4  | 14–15  | 64      | 43           | 12      | 30       | 12      | 0       | 161   |
| Batch 5  | 16     | 28      | 40           | 20      | 46       | 63      | 0       | 197   |
| Batch 6  | 17–20  | 514     | 574          | 110     | 29       | 606     | 467     | 2300  |
| Batch 7  | 21     | 649     | 662          | 360     | 744      | 630     | 568     | 3613  |
| Batch 8  | 22–23  | 30      | 30           | 40      | 33       | 143     | 18      | 294   |
| Batch 9  | 24, 30 | 61      | 55           | 100     | 75       | 78      | 101     | 470   |
| Batch 10 | 36     | 600     | 600          | 600     | 600      | 600     | 600     | 3600  |
Table 2. Comparison of recognition accuracy in long-term drift (%).

| Method | 1–2   | 1–3   | 1–4   | 1–5   | 1–6   | 1–7   | 1–8   | 1–9   | 1–10  | Average |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|---------|
| SVM    | 47.99 | 57.57 | 65.22 | 32.99 | 45.09 | 35.57 | 24.83 | 40.21 | 31.19 | 42.30   |
| ELM    | 69.13 | 46.22 | 32.30 | 46.19 | 44.91 | 35.37 | 25.51 | 33.19 | 37.19 | 41.11   |
| CCPCA  | 77.65 | 67.91 | 65.84 | 69.54 | 72.04 | 54.58 | 65.31 | 65.11 | 37.14 | 63.90   |
| OSC    | 79.74 | 35.25 | 48.45 | 52.28 | 34.30 | 43.84 | 49.66 | 45.32 | 22.83 | 45.74   |
| LDA    | 70.90 | 73.58 | 63.35 | 59.90 | 63.57 | 55.58 | 67.69 | 47.23 | 43.22 | 60.56   |
| DS     | 42.77 | 30.90 | 39.13 | 48.22 | 26.35 | 19.96 | 48.64 | 23.19 | 27.94 | 34.12   |
| GLSW   | 72.67 | 42.37 | 70.19 | 52.79 | 49.78 | 43.18 | 57.48 | 41.91 | 37.47 | 51.98   |
| DRCA   | 64.31 | 83.35 | 80.75 | 74.62 | 55.04 | 42.37 | 48.64 | 40.00 | 39.39 | 58.72   |
| CDSL   | 79.18 | 82.85 | 80.75 | 76.14 | 71.78 | 56.10 | 74.49 | 64.68 | 40.31 | 69.59   |
| AL-KLD | 83.63 | 74.40 | 62.75 | 85.63 | 71.45 | 53.80 | 71.78 | 32.61 | 54.62 | 65.63   |
| AL-JSD | 75.82 | 74.42 | 62.10 | 83.60 | 69.82 | 52.00 | 72.50 | 45.00 | 51.83 | 65.23   |
| AL-HD  | 87.16 | 71.61 | 62.60 | 85.18 | 68.32 | 50.08 | 70.88 | 45.45 | 47.62 | 65.43   |
| CDAL   | 88.63 | 87.34 | 89.31 | 91.62 | 81.06 | 63.77 | 78.41 | 62.05 | 54.68 | 77.43   |
Table 3. Long-term drift parameter optimization.

| Parameters | Batch 2 | Batch 3 | Batch 4 | Batch 5 | Batch 6 | Batch 7 | Batch 8 | Batch 9 | Batch 10 |
|------------|---------|---------|---------|---------|---------|---------|---------|---------|----------|
| c          | 10^0    | 10^3    | 10^3    | 10^10   | 10^15   | 10^3    | 10^6    | 10^6    | 10^0     |
| γ          | 10^1    | 10^1    | 10^3    | 10^6    | 10^8    | 10^1    | 10^5    | 10^7    | 10^2     |
| ω1         | 0.92    | 0.98    | 0.88    | 0.96    | 0.86    | 0.16    | 0.78    | 0.13    | 0.71     |
Table 4. Comparison of recognition accuracy in short-term drift (%).

| Method | 1–2   | 2–3   | 3–4   | 4–5   | 5–6   | 6–7   | 7–8   | 8–9   | 9–10  | Average |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|---------|
| SVM    | 47.99 | 60.03 | 71.40 | 58.38 | 54.69 | 57.82 | 69.73 | 27.02 | 33.56 | 53.40   |
| ELM    | 69.13 | 63.68 | 63.98 | 59.90 | 47.13 | 56.02 | 69.39 | 26.81 | 28.69 | 53.86   |
| CCPCA  | 77.65 | 67.15 | 57.14 | 55.33 | 53.26 | 55.47 | 75.51 | 77.45 | 26.14 | 60.57   |
| OSC    | 79.94 | 73.64 | 70.19 | 51.78 | 56.22 | 53.67 | 48.64 | 61.28 | 28.89 | 58.23   |
| LDA    | 70.90 | 46.78 | 82.61 | 69.04 | 73.09 | 56.35 | 85.71 | 77.23 | 16.67 | 64.26   |
| DS     | 42.77 | 43.69 | 47.83 | 21.32 | 28.91 | 27.35 | 48.64 | 16.60 | 35.58 | 34.74   |
| GLSW   | 72.67 | 66.08 | 43.48 | 23.35 | 27.52 | 33.63 | 48.64 | 68.94 | 30.58 | 46.10   |
| DRCA   | 64.31 | 66.27 | 95.03 | 47.21 | 54.96 | 68.92 | 84.69 | 72.55 | 25.25 | 64.35   |
| CDSL   | 79.18 | 77.24 | 97.52 | 65.99 | 74.13 | 86.44 | 89.46 | 77.02 | 34.11 | 75.58   |
| AL-KLD | 83.63 | 87.68 | 93.74 | 70.06 | 77.43 | 85.34 | 75.76 | 27.02 | 50.26 | 72.33   |
| AL-JSD | 88.96 | 93.70 | 91.60 | 68.86 | 71.01 | 82.19 | 70.45 | 74.77 | 45.52 | 76.34   |
| AL-HD  | 86.05 | 87.52 | 88.17 | 70.66 | 59.52 | 83.02 | 73.86 | 70.57 | 36.15 | 72.84   |
| CDAL   | 88.63 | 96.34 | 98.47 | 72.46 | 89.96 | 87.52 | 79.92 | 81.36 | 51.99 | 82.96   |
Table 5. Short-term drift parameter optimization.

| Parameters | Batch 2 | Batch 3 | Batch 4 | Batch 5 | Batch 6 | Batch 7 | Batch 8 | Batch 9 | Batch 10 |
|------------|---------|---------|---------|---------|---------|---------|---------|---------|----------|
| c          | 10^0    | 10^8    | 10^6    | 10^6    | 10^2    | 10^10   | 10^9    | 10^9    | 10^7     |
| γ          | 10^1    | 10^7    | 10^8    | 10^5    | 10^2    | 10^7    | 10^7    | 10^7    | 10^7     |
| ω1         | 0.92    | 0.98    | 0.98    | 0.40    | 0.10    | 0.98    | 0.97    | 0.62    | 0.30     |
Table 6. Classification accuracy with different values of N on Setting 1.

| N        | 5     | 10    | 20    | 30    | 40    | 50    |
|----------|-------|-------|-------|-------|-------|-------|
| Batch 2  | 82.57 | 86.55 | 87.01 | 88.63 | 90.78 | 91.71 |
| Batch 3  | 74.26 | 74.81 | 74.71 | 87.34 | 88.49 | 91.01 |
| Batch 4  | 71.79 | 74.83 | 77.30 | 89.31 | 91.74 | 92.79 |
| Batch 5  | 43.23 | 74.33 | 72.32 | 91.62 | 92.36 | 95.24 |
| Batch 6  | 61.83 | 61.92 | 66.23 | 81.06 | 90.62 | 94.44 |
| Batch 7  | 44.29 | 57.01 | 64.60 | 63.77 | 71.73 | 76.14 |
| Batch 8  | 69.20 | 79.23 | 80.22 | 78.41 | 84.65 | 84.02 |
| Batch 9  | 47.96 | 43.70 | 56.22 | 62.05 | 67.73 | 76.67 |
| Batch 10 | 42.29 | 51.28 | 53.07 | 54.68 | 56.18 | 58.39 |
| Average  | 59.71 | 67.07 | 70.19 | 77.43 | 81.59 | 84.49 |
Table 7. Classification accuracy with different values of N on Setting 2.

| N        | 5     | 10    | 20    | 30    | 40    | 50    |
|----------|-------|-------|-------|-------|-------|-------|
| Batch 2  | 82.57 | 86.55 | 87.01 | 88.63 | 90.78 | 91.71 |
| Batch 3  | 87.56 | 90.42 | 92.46 | 96.34 | 98.90 | 98.89 |
| Batch 4  | 90.38 | 94.04 | 95.74 | 98.47 | 99.17 | 98.20 |
| Batch 5  | 65.63 | 64.71 | 68.36 | 72.46 | 70.70 | 68.71 |
| Batch 6  | 59.30 | 72.40 | 86.01 | 89.96 | 91.50 | 93.56 |
| Batch 7  | 84.10 | 84.07 | 84.02 | 87.52 | 91.02 | 94.44 |
| Batch 8  | 69.20 | 74.65 | 76.28 | 79.92 | 85.83 | 92.62 |
| Batch 9  | 68.17 | 73.48 | 78.44 | 81.36 | 86.98 | 94.76 |
| Batch 10 | 40.11 | 42.76 | 48.49 | 51.99 | 55.14 | 58.17 |
| Average  | 71.89 | 75.90 | 79.65 | 82.96 | 85.56 | 87.90 |
