An Odor Labeling Convolutional Encoder–Decoder for Odor Sensing in Machine Olfaction

Department of Electromechanical Engineering, Guangdong University of Technology, 100, Waihuan Rd. W., Guangzhou Higher Education Mega Center, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Sensors 2021, 21(2), 388; https://doi.org/10.3390/s21020388
Submission received: 25 November 2020 / Revised: 30 December 2020 / Accepted: 5 January 2021 / Published: 8 January 2021
(This article belongs to the Section Electronic Sensors)

Abstract

Deep learning methods have been widely applied to visual and acoustic technology. In this paper, we propose an odor labeling convolutional encoder–decoder (OLCE) for odor identification in machine olfaction. OLCE comprises a convolutional neural network encoder and decoder in which the encoder output is constrained to odor labels. An electronic nose was used to collect gas responses, following a standardized experimental procedure. Several evaluation indexes were calculated to assess the effectiveness of the algorithm: accuracy 92.57%, precision 92.29%, recall rate 92.06%, F1-score 91.96%, and Kappa coefficient 90.76%. We also compared the model with several algorithms used in machine olfaction. The comparison demonstrated that OLCE performed best among these algorithms.

1. Introduction

Machine olfaction is an advanced technology that captures odorous materials and identifies them by distinguishing the differences in response patterns. Usually, electronic noses (e-noses) are used, which consist of an array of gas sensors and intelligent identification algorithms mimicking biological noses, to ‘smell’ and ‘sense’ odors [1,2].
Gas sensors typically detect gases by measuring the change in electrical conductivity. Sensitivity, selectivity, response time, and recovery time are the major specifications used to evaluate the performance of a gas sensor [3]. There are different types of gas sensors: catalytic combustion, electrochemical, thermal-conductive, infrared absorption, paramagnetic, solid electrolyte, and metal oxide semiconductor sensors [3]. In recent years, paper-based sensors, a new type of gas sensor fabricated from cellulose paper, have emerged; they are flexible, tailorable, low-cost, lightweight, and environmentally friendly [4]. The response of a gas sensor detecting an odor is a synthetic process, since the sensor may be sensitive to a group of different molecules, a property usually called 'cross-sensitivity'. Cross-sensitivity is a characteristic of gas sensors that arises from poor selectivity [5]. It is an issue when measuring gas concentration with a single gas sensor; however, it can be exploited as a feature to identify odors when an array of gas sensors is used. The response patterns of the sensor array differ across odors. Sensing responses are difficult to interpret because of the synthetic, non-linear sensing process of gas sensors. Most gas sensors are fabricated for detecting industrial gases or volatile organic compounds (VOCs).
Odor identification methods have progressed in recent years and have been applied in specific fields. However, such methods ignore the essence of odors. An odor is usually composed of a group of odorous compounds. Humans sniff such odorous mixtures and can discriminate and identify an odor if trained to recognize it. Without prior knowledge, we have difficulty describing an unknown odor; instead, we describe it using semantic words. Accordingly, is there a method to describe the odor space so that odors can be recorded and encoded in some general form?
It is a challenge to determine the dimensionality of the olfactory perceptual space, because the mechanisms of olfactory perception are still under investigation. Physiological studies have identified that the human olfactory system consists of around 400 odorant receptor types [6]. An odor activates some of these receptor types to generate a specific pattern, which allows humans to discriminate it. The number of odorant receptor types sets an upper bound on the dimensionality of the perceptual space. There is no dedicated vocabulary for describing odors in major languages; instead, words about objects, for example flowers and animals, or emotions such as pleasantness, are used to describe olfactory perceptions. J. E. Amoore claimed that odors could be divided into seven groups regarded as primary odors [7]. Markus Meister suggested that olfactory perceptual space may contain around 20 dimensions or fewer [8], and Yeshurun and Sobel reviewed evidence that humans are good at odor detection and discrimination but poor at odor identification and naming [9]. Semantic descriptors, profiled from a list of defined verbal words, are rated by human sniffers, but to date there is no universal list of odor semantic descriptors.
Currently, there is no odor space that describes the variety of odors in nature. Some studies have revealed a significant relationship between odor molecular structure and olfactory perception [10]. Functional groups and hydrocarbon structural features were considered to be factors influencing olfactory perceptions. One hypothesis holds that odorants possessing the same functional groups activate the same glomerular modules [11], which generate similar perceptual patterns, so that humans identify them as the same type of odor. Recent studies have revealed that the 3D structure of odorous molecules has a more noticeable impact on olfactory perception [12]. Considering the complexity of molecular structure information, the mapping to odor space may be non-linear.
Several studies have investigated the mapping between odor responses and odorous perceptual labels. T. Nakamoto designed an odor sensing system consisting of a mass spectrometer and large-scale neural networks to predict odor perceptual information [13]. R. Haddad et al. investigated the relationship between odor pleasantness and e-nose sensing responses by modeling a feed-forward back-propagation neural network [14]. D. Wu et al. designed a convolutional neural network for predicting odor pleasantness [15]. These models for predicting odor perceptual descriptors perform decently on particular datasets. However, enabling machines to perceive and describe odors through distributed representations remains a challenge.
It is worth establishing some form of odor space to describe a sufficiently complete group of odors in nature. Such an odor space should consist of numerical values with a definite dimensionality. Because the mapping from molecular structure is non-linear, the odor space itself should preferably be a linear space for convenient interpretation. Semantic olfactory descriptors are then merely points in this quantization, just as the color "red" is quantified to (255, 0, 0) in RGB color space. The importance of such an odor space is that it provides a quantization so that odors can be converted into information for data storage or transmission, and an odor can be reproduced by blending similar odorants to regenerate it.
Machine olfaction has been applied widely in many fields in recent years. Linear methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and support vector machines (SVM) have been used in odor discrimination analysis [16]. PCA is an unsupervised method that ignores discriminant information and is a popular method for dimensionality reduction [17]. LDA is a supervised classification method that finds decision surfaces and calculates the signed orthogonal distance of data points to them; it has been used in the identification of Chinese herbal medicines [18]. SVM is a regularized method whose aim is to find the maximum margin between classes; K. Brudzewski applied SVM as the classification tool for identifying tobacco [19]. Classifiers based on linear methods can be formulated as convex problems, which have the advantage of mathematical interpretability. Non-linear methods such as artificial neural networks (ANN) have also been introduced into machine olfaction. In recent years, deep learning methods have developed dramatically and are widely used in fields such as computer vision, speech processing, and autonomous driving. They have also been introduced into machine olfaction for odor identification [15,20].
Several advantages make machine olfaction technology applicable to many fields. Firstly, it is a non-destructive technique for detecting volatiles released from the surface of objects [21]. Secondly, e-noses are usually portable, making it convenient to detect odors anywhere and anytime [22]. Thirdly, e-noses can extend the scope of olfactory perception, since gas sensors can detect chemicals that humans are unable to smell [23]. Furthermore, e-noses can be used in some unpleasant environments [24,25].
Linear classification methods usually require highly correlated features and incur high calibration costs, which limit the amount of training data [16]. Non-linear methods are harder to interpret; nonetheless, non-linear methods, especially deep learning methods, have a higher capacity for odor identification. In this paper, we borrow the idea of the auto-encoder and propose a novel deep learning algorithm for odor identification, the Odor Labeling Convolutional Encoder–Decoder (OLCE). OLCE consists of an encoder and a decoder, where the encoder output is constrained to odor labels. The decoder structure offers some clues about how the model learns features. In the following sections, we first describe the experimental setup and the modeling of OLCE. After that, the performance of the model, comparisons with other methods, and an overview of the decoded responses are presented. Finally, perspectives on machine olfaction are discussed.

2. Materials and Methods

2.1. Research Scheme

The OLCE model was built, trained, and tested on self-collected gas response datasets. In this study, odors from seven non-crushed Chinese herbal medicines were collected with an electronic nose. Gas-response collection followed a self-designed standard procedure to ensure detection consistency and data validity. We also implemented other algorithms that have been used for odor identification to benchmark the performance of OLCE. The experimental procedure is displayed in Figure 1.

2.2. Experiment Setup

The instruments and tools used in the experiment included a PEN-3 electronic nose, beakers, and a computer. Experimental subjects were placed in beakers for data collection. The PEN-3 electronic nose, manufactured by AIRSENSE Analytics Inc. (Schwerin, Germany), was used to collect gas sensor responses. The computer ran Winmuster, the PEN-3 e-nose control software from AIRSENSE Analytics, which was used to connect to and control the e-nose. The architecture of the experimental setup is displayed in Figure 2.

2.3. The Preparation of Experimental Materials

We selected seven Chinese herbal medicines (Betel Nut, Dried Ginger, Rhizoma Alpiniae Officinarum, Tree Peony Bark, Fructus Amomi, Rhizoma Curcumae Aeruginosae, Fructus Aurantii) for the experiment. To ensure the consistency of gas sensor responses, the procedures for preparing these materials were carefully set as follows:
  • Materials in their initial condition are placed separately in clean beakers.
  • Beakers are equilibrated for over 20 min to enrich the volatiles released from the surface of the medicines.
  • The temperature is kept around 25 °C.
  • The humidity is kept around 75%.

2.4. PEN-3 Electronic Nose

Response data were collected with the PEN-3 e-nose (AIRSENSE Analytics Inc.). The PEN-3 e-nose is a general-purpose gas response sampling instrument with 10 metal-oxide gas sensors, each of which has a different sensitivity to different gases, as shown in Table 1. With the combination of these 10 sensors, the PEN-3 can sense various gases, which makes it a suitable instrument for this research. The settings of the e-nose are listed in Table 2.

2.5. OLCE Modeling

Figure 3 describes the principle of OLCE. OLCE contains a convolutional encoder and a convolutional decoder. The OLCE input consists of responses that have been zero-center normalized, and the OLCE output aims to reproduce the input. The intermediate (representation) layer outputs the identification results. The encoder and decoder are trained together on a training dataset; to verify the model, the results from the representation layer are used to evaluate its performance.
The original response data were first zero-center normalized and then fed into the OLCE model. The i-th zero-center normalized response data point $\hat{x}_i$ is calculated as follows:

$$\hat{x}_i = \frac{x_i - x_{\mathrm{mean}}}{x_{\mathrm{max}} - x_{\mathrm{min}}},$$

where $x_i$ is the i-th original data point from the e-nose, $x_{\mathrm{mean}}$ is the average of the 120 data points collected from a gas sensor, and $x_{\mathrm{max}}$ and $x_{\mathrm{min}}$ are the maximum and minimum values of those 120 data points, respectively.
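For concreteness, the following minimal NumPy sketch applies this normalization to one 120 × 10 sample, treating each sensor column separately (the per-sensor application follows the definition above; the random sample is only a placeholder):

```python
import numpy as np

def zero_center_normalize(sensor_series: np.ndarray) -> np.ndarray:
    """Apply x_i' = (x_i - x_mean) / (x_max - x_min) to one sensor's 120-point curve."""
    x_mean = sensor_series.mean()
    x_max, x_min = sensor_series.max(), sensor_series.min()
    return (sensor_series - x_mean) / (x_max - x_min)

sample = np.random.rand(120, 10)  # placeholder for one PEN-3 sample (120 points x 10 sensors)
normalized = np.apply_along_axis(zero_center_normalize, 0, sample)
```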
Suppose the input is $X$, which here is the sensing response collected by the gas sensors. The labels of the Chinese herbal medicines are defined as $y$, using one-hot encoding. The encoder is defined as $F(\cdot)$ and the decoder as $G(\cdot)$. The encoder can therefore be written as

$$\hat{y} = F(X),$$

and the decoder as

$$\hat{X} = G(\hat{y}),$$

where $\hat{X}$ is the output of the decoder. The aim of building the encoder–decoder is to obtain accurate labeling results $\hat{y}$. To achieve this, the rebuilt responses $\hat{X}$ must approximate the original responses $X$, i.e., $\hat{X} \approx X$. In other words, the aim of the encoder–decoder can be stated as

$$\text{minimize } \lVert \hat{y} - y \rVert \quad \text{subject to} \quad G(F(X)) \approx X.$$
The encoder was designed as a convolutional neural network. The convolutional layers extract features by computing the product sum of the input variables. ReLU was used to introduce non-linearity into the convolutional network:

$$\mathrm{ReLU}(x) = \max(0, x).$$
A max-pooling layer was introduced to reduce the spatial size of the convolved data. After that, a fully connected layer was introduced to learn non-linear combinations of the high-level features, and softmax was applied at the output layer as a classifier to identify odor labels. Symmetrically, the decoder is a convolutional neural network with the same structure. Figure 4 describes the architecture of the encoder and decoder, and Table 3 shows the network parameters.
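As an illustration, the PyTorch sketch below assembles an encoder and a symmetric decoder with the layer shapes listed in Table 3. It is a minimal sketch, not the authors' exact implementation: plain upsampling stands in for the unpooling layers, and the joint loss suggested in the final comment is our assumption, since the paper does not state its training objective explicitly.

```python
import torch
import torch.nn as nn

class OLCE(nn.Module):
    """Encoder-decoder sketch following the layer shapes in Table 3.

    Input: (batch, 10 sensors, 120 time points). The 7-way encoder output is
    the representation layer constrained to odor labels; the decoder tries to
    rebuild the input from those labels.
    """

    def __init__(self, n_sensors: int = 10, n_classes: int = 7):
        super().__init__()
        self.encoder_conv = nn.Sequential(
            nn.Conv1d(n_sensors, 7, kernel_size=5), nn.ReLU(),    # 10x120 -> 7x116
            nn.MaxPool1d(2),                                      # -> 7x58
            nn.Conv1d(7, 12, kernel_size=3), nn.ReLU(),           # -> 12x56
            nn.MaxPool1d(2),                                      # -> 12x28
        )
        self.encoder_fc = nn.Linear(12 * 28, n_classes)           # 336 -> 7 label logits
        self.decoder_fc = nn.Linear(n_classes, 12 * 28)           # 7 -> 336
        self.decoder_conv = nn.Sequential(
            nn.Upsample(scale_factor=2),                          # stands in for unpooling: -> 12x56
            nn.ConvTranspose1d(12, 7, kernel_size=3), nn.ReLU(),  # -> 7x58
            nn.Upsample(scale_factor=2),                          # -> 7x116
            nn.ConvTranspose1d(7, n_sensors, kernel_size=5),      # -> 10x120
        )

    def forward(self, x):
        logits = self.encoder_fc(self.encoder_conv(x).flatten(1))  # softmax/cross-entropy acts on these
        rebuilt = self.decoder_conv(self.decoder_fc(logits).view(-1, 12, 28))
        return logits, rebuilt

# Assumed joint objective (not stated explicitly in the paper):
# loss = nn.CrossEntropyLoss()(logits, labels) + nn.MSELoss()(rebuilt, x)
```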

2.6. Comparison Models

To assess the performance of OLCE, several algorithms that have been applied in machine olfaction were selected for comparison:
  • linear discriminant analysis (LDA) [26],
  • multi-layer perceptron (MLP) [27],
  • decision tree (DT) [28],
  • principal component analysis (PCA) with LDA [29],
  • convolutional neural networks (CNN) and support vector machine (SVM) [30].
LDA can be used not only for dimensionality reduction but also for classification. It reduces within-class distances and increases between-class distances.
An MLP classifier is an artificial neural network that has been applied to odor identification. The MLP is a supervised non-linear function approximator that learns a function $f(\cdot): \mathbb{R}^m \rightarrow \mathbb{R}^n$, where $m = 1200$ corresponds to a flattened 120 × 10 sample and $n = 7$ is the number of labels. The MLP consisted of four hidden layers with the ReLU activation function.
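A rough scikit-learn version of this baseline is sketched below on placeholder data of the right shape; the hidden-layer widths and iteration limit are our assumptions, since the text only specifies four ReLU hidden layers.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(700, 1200)            # placeholder: 700 flattened 120 x 10 responses
y = np.random.randint(0, 7, size=700)    # placeholder: 7 medicine classes

# Four ReLU hidden layers as described; the widths (512, 256, 128, 64) are assumed.
mlp = MLPClassifier(hidden_layer_sizes=(512, 256, 128, 64), activation="relu", max_iter=300)
mlp.fit(X, y)
```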
DT is a non-parametric supervised learner that classifies data based on the known sample distribution. It has performed decently in odor classification. Here, the classification criterion was set to the Gini impurity,

$$H(X_m) = \sum_k p_{mk}(1 - p_{mk}),$$

where $X_m$ denotes the samples at node m, which represents a region $R_m$ with $N_m$ observations, and the proportion of class k in node m is $p_{mk} = \frac{1}{N_m}\sum_{x_i \in R_m} I(y_i = k)$. To prevent overfitting, the maximum depth of the tree was limited to 10.
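A minimal scikit-learn equivalent of this baseline, with the Gini criterion and a depth cap of 10 as described above (placeholder data again stands in for the e-nose samples):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(700, 1200)            # placeholder flattened responses
y = np.random.randint(0, 7, size=700)    # placeholder class labels

tree = DecisionTreeClassifier(criterion="gini", max_depth=10)  # Gini impurity, depth <= 10
tree.fit(X, y)
```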
PCA–LDA is a combined model that has been applied to odor identification. PCA performs dimensionality reduction by orthogonally projecting the input data onto a lower-dimensional linear space via singular value decomposition, with each component scaled. LDA then performs the classification.
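The combination can be expressed as a scikit-learn pipeline, sketched below; the 49 retained components follow the grid-search result noted in the caption of Table 5, and whiten=True is our reading of the per-component scaling mentioned above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

X = np.random.rand(700, 1200)            # placeholder flattened responses
y = np.random.randint(0, 7, size=700)    # placeholder class labels

# PCA reduces to 49 components (grid-search optimum reported with Table 5), then LDA classifies.
pca_lda = make_pipeline(PCA(n_components=49, whiten=True), LinearDiscriminantAnalysis())
pca_lda.fit(X, y)
```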
In the CNN–SVM model, the CNN is a typical feed-forward neural network used for feature extraction, and the SVM is a supervised learning algorithm used for classification. The CNN consisted of two one-dimensional convolutional layers, max-pooling layers, and a fully connected layer.
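The sketch below illustrates the CNN–SVM idea: a small one-dimensional CNN turns each response into a feature vector, and an SVM classifies those features. The channel counts, kernel sizes, and the use of an untrained extractor are all assumptions for illustration; in the cited work [30] the CNN is trained before its features are handed to the SVM.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Feature extractor: two 1-D convolutional layers with max pooling (sizes assumed).
extractor = nn.Sequential(
    nn.Conv1d(10, 16, kernel_size=5), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(),
)

X = torch.rand(700, 10, 120)                 # placeholder responses
y = np.random.randint(0, 7, size=700)        # placeholder labels

with torch.no_grad():                        # extract features (extractor left untrained here)
    features = extractor(X).numpy()

svm = SVC(kernel="rbf")                      # SVM performs the final classification
svm.fit(features, y)
```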
All models were coded in Python, using the open-source packages scikit-learn [31] and PyTorch [32].

3. Results

3.1. The Input of OLCE

The input of OLCE is a zero-centered 10 × 120 sample collected by the PEN-3 e-nose following the experimental procedure described in the previous section. For each medicine, 100 response samples were measured, so the total number of samples in the dataset was 7 × 100 = 700. Each sample is a 120 × 10 matrix (120 time points × 10 sensors).
Figure 5 shows the zero-centered responses that serve as the input of OLCE; we randomly selected four samples from each medicine class. There are slight differences within the same medicine class because different medicine sources were used in the collection experiment, whereas the responses differ noticeably between classes. Some sensors show upward baseline drift because of the various volatilization rates of the volatiles and the sensors' differing sensitivities to them; some sensors show downward baseline drift because of overflow in the sensor chamber. Since OLCE receives the 10 × 120 sample as input without feature extraction, these drifts can be ignored.

3.2. OLCE Evaluation

The OLCE model was executed 10 times, and several performance evaluation indexes (accuracy, precision, recall rate, F-score, Kappa coefficient, and Hamming loss) were calculated to assess the model's effectiveness. Each OLCE model was trained for 200 epochs. Figures S1 and S2 show the accuracy and loss rate of the best OLCE model, and Figures S3 and S4 show the average accuracy and loss rate of the best five OLCE models. The results are displayed in Table 4.
OLCE had a maximum accuracy of 0.9714 and a minimum accuracy of 0.8800. It had a decent precision rate (between 0.8397 and 0.9635) and recall rate (between 0.8419 and 0.9576). The F1 score of the model was between 0.8379 and 0.9591. This indicates that OLCE produced few false positive and false negative predictions.
The Kappa coefficient was also calculated to evaluate the consistency and precision of the classifier:

$$\mathrm{Kappa} = \frac{P_o - P_e}{1 - P_e},$$

where $P_o$ is the accuracy and $P_e$ is calculated as

$$P_e = \frac{a_1 b_1 + a_2 b_2 + \cdots + a_7 b_7}{n \cdot n},$$

where $i = 1, 2, \ldots, 7$ is the class index, $a_i$ is the number of samples of class i in the dataset, $b_i$ is the number of samples assigned to class i after classification, and n is the total number of samples. The Kappa values show that OLCE has excellent consistency.
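For reference, the small sketch below computes the Kappa coefficient both directly from the formula above (via a confusion matrix) and with scikit-learn's cohen_kappa_score; the example labels are arbitrary.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3])      # arbitrary example labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 3, 3])

cm = confusion_matrix(y_true, y_pred)
n = cm.sum()
p_o = np.trace(cm) / n                                      # observed accuracy P_o
p_e = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / (n * n)     # expected agreement P_e
kappa = (p_o - p_e) / (1 - p_e)

assert np.isclose(kappa, cohen_kappa_score(y_true, y_pred))
```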

3.3. Comparison

Several algorithms used in machine olfaction were built for comparison with OLCE. Each model was executed 10 times, and the accuracy scores of each model are listed in Table 5. The highest and lowest accuracies of LDA were 0.9314 and 0.8686, and those of CNN–SVM were 0.9371 and 0.8514. MLP and PCA–LDA had relatively low scores; their best scores were 0.4342 and 0.5200, respectively. The decision tree yielded relatively good scores between 0.7600 and 0.8857. The 'Max.' and 'Min.' columns show that OLCE achieved the best scores (highest 0.9714, lowest 0.8800). Moreover, OLCE had the best average score (0.9257). The 'Var.' column gives the variance of the accuracy scores over the 10 runs of each algorithm. LDA had the highest consistency, with the lowest accuracy variance (0.0005) across its 10 runs. In contrast, the PCA–LDA model had the highest variance (0.0141), revealing the worst training consistency. OLCE had the third-lowest accuracy variance, 0.0009.
Overall, as the results above illustrate, LDA, CNN–SVM, and OLCE performed decently for machine olfaction, with better average prediction accuracy and stable consistency. Moreover, considering its combined advantages in accuracy, precision, recall rate, F1 score, and Kappa coefficient, OLCE is well suited to discriminating odors from gas responses collected by an e-nose.

3.4. Overview of Decoded Responses

OLCE is an encoder–decoder model whose representation layer consists of the odor labels, so it is interesting to examine the decoded responses. We randomly selected one original response and one decoded response from both the training set and the test set.
Figure 6a,b compares the encoder input with the decoder output. Firstly, OLCE reproduces the response signals in the response state. Some small changes in the response state are decoded as fluctuating signals; for instance, in Figure 6a, row 4, some baseline drifts are decoded as fluctuating signals. Secondly, OLCE focuses on positive or negative baseline drift. For example, in Figure 6b, row 5, when the accumulated signal changes exceed a certain level, the decoder generates fluctuations. It is possible that some response changes activate OLCE to generate fluctuating waves. These fluctuations can be regarded as 'feature stamps', which reveal clues about which features OLCE focuses on. Furthermore, the model ignores response fluctuations from a single gas sensor. For instance, as shown in Figure 6a, the 'W3S' response (brown line) in the 'Fructus Amomi' subfigure (row 5, column 1) fluctuates noticeably, but the decoder did not treat it as a feature.
Figure 6c shows a typical decoded response. It can be seen that OLCE learns features within one or more small windows of a response dataset. The response state is the most significant feature for OLCE, as shown in the red dotted box. Moreover, OLCE regards some gentle changes in the steady state as features, and the intersections of curves can also be significant features, as shown in the green dotted box. OLCE may also concentrate on changes accumulated in the steady state, as shown in the blue dotted box.

4. Discussion

Using an e-nose to identify an odor is a process of detecting and discriminating the components that the gas sensors are sensitive to. This differs from measuring instruments such as GC–MS, which can identify the individual components of an odor. An e-nose with an array of gas sensors and a suitable identification algorithm mimics human olfaction to identify odors and can be applied in many fields where fast detection is required, owing to its portability, ease of design, and low cost. Hence, a reliable algorithm to discriminate the various response patterns is necessary.
OLCE has an elegant, symmetrical structure based on a convolutional neural network, which makes the model easy to build. The experimental results show that OLCE performs decently in the odor identification of Chinese herbal medicines according to several performance indexes, suggesting that OLCE may also be applicable to other odor identification tasks. The OLCE encoder encodes sensor responses into odor labels using a convolutional neural network, and the OLCE decoder reproduces the sensor responses using a convolutional neural network with a symmetrical structure. The reproduced responses on the decoder side reveal clues about which features OLCE focuses on. The one-hot encoded labels in the representation (intermediate) layer make the classification more robust than categorical encoding because of the mutual exclusivity of the encoding bits.
OLCE is a multi-class classifier that uses one-hot encoding to output the identification results. Multi-class classifiers are suitable for scenarios in which the identification categories are mutually exclusive. The other type is the multi-label classifier, in which an instance may belong to more than one class. It would be interesting to replace the one-hot labels in the representation layer of OLCE with binary encoded labels so that the model could be used as a multi-label classifier.

5. Conclusions

In this paper, we proposed a novel Odor Labeling Convolutional Encoder–Decoder (OLCE) for odor identification. OLCE is an encoder–decoder structure built with convolutional neural networks, in which the representation (intermediate) layer is constrained to odor labels. To evaluate the effectiveness of the model, several performance evaluation indexes (accuracy, precision, recall rate, F1-score, and Kappa coefficient) were calculated. We also implemented several common algorithms used in odor identification to compare performance. The results demonstrated that OLCE performed well according to these indexes, achieving the highest average accuracy (0.9257) and good training consistency among the compared algorithms.

Supplementary Materials

The following are available online at https://www.mdpi.com/1424-8220/21/2/388/s1. Figure S1: The accuracy of the top 1 model during training. Figure S2: The loss rate of the top 1 model during training. Figure S3: The average accuracy of the top five models during training. Figure S4: The average loss rate of the top five models during training.

Author Contributions

Conceptualization, T.W. and D.L.; methodology, T.W.; software, J.L.; validation, T.W., Z.M., and Q.L.; formal analysis, T.W.; investigation, Z.M.; resources, J.L.; data curation, Q.L.; writing–original draft preparation, T.W.; writing–review and editing, T.W.; visualization, J.L.; supervision, D.L.; project administration, L.W.; funding acquisition, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

The work was funded by National Natural Science Foundation of China grant number 61705045; National Natural Science Foundation of China grant number 61571140; Guangdong Science and Technology Department grant number 2019B101001017.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OLCE    Odor Labeling Convolutional Encoder–Decoder
LDA     Linear Discriminant Analysis
MLP     Multi-Layer Perceptron
DT      Decision Tree
PCA     Principal Component Analysis
CNN     Convolutional Neural Networks
SVM     Support Vector Machine
BN      Betel Nut
DG      Dried Ginger
RAO     Rhizoma Alpiniae Officinarum
TPB     Tree Peony Bark
FAm     Fructus Amomi
RCA     Rhizoma Curcumae Aeruginosae
FAu     Fructus Aurantii

References

  1. Gardner, J.W.; Bartlett, P.N. A brief history of electronic noses. Sens. Actuators B Chem. 1994, 18, 211–220. [Google Scholar] [CrossRef]
  2. Wasilewski, T.; Migoń, D.; Gębicki, J.; Kamysz, W. Critical review of electronic nose and tongue instruments prospects in pharmaceutical analysis. Anal. Chim. Acta 2019, 1077, 14–29. [Google Scholar] [CrossRef] [PubMed]
  3. Dey, A. Semiconductor metal oxide gas sensors: A review. Mater. Sci. Eng. B 2018, 229, 206–217. [Google Scholar] [CrossRef]
  4. Tai, H.; Duan, Z.; Wang, Y.; Wang, S.; Jiang, Y. Paper-Based Sensors for Gas, Humidity, and Strain Detections: A Review. ACS Appl. Mater. Interfaces 2020, 12, 31037–31053. [Google Scholar] [CrossRef]
  5. Feng, S.; Farha, F.; Li, Q.; Wan, Y.; Xu, Y.; Zhang, T.; Ning, H. Review on Smart Gas Sensing Technology. Sensors 2019, 19, 3760. [Google Scholar] [CrossRef] [Green Version]
  6. Malnic, B.; Godfrey, P.A.; Buck, L.B. The human olfactory receptor gene family. Proc. Natl. Acad. Sci. USA 2004, 101, 2584–2589. [Google Scholar] [CrossRef] [Green Version]
  7. Amoore, J.E. Specific anosmia and the concept of primary odors. Chem. Senses 1977, 2, 267–281. [Google Scholar] [CrossRef]
  8. Meister, M. On the dimensionality of odor space. eLife 2015, 4, e07865. [Google Scholar] [CrossRef]
  9. Yeshurun, Y.; Sobel, N. An odor is not worth a thousand words: From multidimensional odors to unidimensional odor objects. Annu. Rev. Psychol. 2010, 61, 219–241. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Rossiter, K.J. Structure–Odor Relationships. Chem. Rev. 1996, 96, 3201–3240. [Google Scholar] [CrossRef] [PubMed]
  11. Johnson, B.A.; Ho, S.L.; Xu, Z.; Yihan, J.S.; Yip, S.; Hingco, E.E.; Leon, M. Functional mapping of the rat olfactory bulb using diverse odorants reveals modular responses to functional groups and hydrocarbon structural features. J. Comp. Neurol. 2002, 449, 180–194. [Google Scholar] [CrossRef] [PubMed]
  12. Rojas, C.; Duchowicz, P.R.; Tripaldi, P.; Diez, R.P. QSPR analysis for the retention index of flavors and fragrances on a OV-101 column. Chemom. Intell. Lab. Syst. 2015, 140, 126–132. [Google Scholar] [CrossRef]
  13. Nakamoto, T. Odor sensing system with multi-dimensional data analysis. Jpn. J. Appl. Phys. 2019, 58, SB0804. [Google Scholar] [CrossRef]
  14. Haddad, R.; Medhanie, A.; Roth, Y.; Harel, D.; Sobel, N. Predicting odor pleasantness with an electronic nose. PLoS Comput. Biol. 2010, 6, e1000740. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Wu, D.; Luo, D.; Wong, K.Y.; Hung, K. POP-CNN: Predicting Odor Pleasantness With Convolutional Neural Network. IEEE Sens. J. 2019, 19, 11337–11345. [Google Scholar] [CrossRef]
  16. Marco, S.; Gutierrez-Galvez, A. Signal and Data Processing for Machine Olfaction and Chemical Sensing: A Review. IEEE Sens. J. 2012, 12, 3189–3214. [Google Scholar] [CrossRef]
  17. Bedoui, S.; Faleh, R.; Samet, H.; Kachouri, A. Electronic nose system and principal component analysis technique for gases identification. In Proceedings of the 10th International Multi-Conference on Systems, Signals & Devices (SSD), Hammamet, Tunisia, 18–21 March 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar] [CrossRef]
  18. Luo, D.H.; Shao, Y.W. Classification of Chinese Herbal Medicine Based on Improved LDA Algorithm Using Machine Olfaction. Appl. Mech. Mater. 2012, 239-240, 1532–1536. [Google Scholar] [CrossRef]
  19. Brudzewski, K.; Osowski, S.; Golembiecka, A. Differential electronic nose and support vector machine for fast recognition of tobacco. Expert Syst. Appl. 2012, 39, 9886–9891. [Google Scholar] [CrossRef]
  20. Jong, G.J.; Hendrick; Wang, Z.H.; Hsieh, K.S.; Horng, G.J. A Novel Feature Extraction Method an Electronic Nose for Aroma Classification. IEEE Sens. J. 2019, 19, 10796–10803. [Google Scholar] [CrossRef]
  21. Brezmes, J.; Fructuoso, M.; Llobet, E.; Vilanova, X.; Recasens, I.; Orts, J.; Saiz, G.; Correig, X. Evaluation of an electronic nose to assess fruit ripeness. IEEE Sens. J. 2005, 5, 97–108. [Google Scholar] [CrossRef] [Green Version]
  22. Das, A.; Dost, R.; Richardson, T.H.; Grell, M.; Wedge, D.C.; Kell, D.B.; Morrison, J.J.; Turner, M.L. Low cost, portable, fast multiparameter data acquisition system for organic transistor odour sensors. Sens. Actuators Chem. B 2009, 137, 586–591. [Google Scholar] [CrossRef]
  23. Deshmukh, S.; Bandyopadhyay, R.; Bhattacharyya, N.; Pandey, R.A.; Jana, A. Application of electronic nose for industrial odors and gaseous emissions measurement and monitoring—An overview. Talanta 2015, 144, 329–340. [Google Scholar] [CrossRef] [PubMed]
  24. Murphy, K.R.; Parcsi, G.; Stuetz, R.M. Non-methane volatile organic compounds predict odor emitted from five tunnel ventilated broiler sheds. Chemosphere 2014, 95, 423–432. [Google Scholar] [CrossRef] [PubMed]
  25. Li, H.; Luo, D.; Sun, Y.; GholamHosseini, H. Classification and Identification of Industrial Gases Based on Electronic Nose Technology. Sensors 2019, 19, 5033. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Akbar, M.A.; Ait Si Ali, A.; Amira, A.; Bensaali, F.; Benammar, M.; Hassan, M.; Bermak, A. An Empirical Study for PCA- and LDA-Based Feature Reduction for Gas Identification. IEEE Sens. J. 2016, 16, 5734–5746. [Google Scholar] [CrossRef]
  27. Benrekia, F.; Attari, M.; Bouhedda, M. Gas sensors characterization and multilayer perceptron (MLP) hardware implementation for gas identification using a Field Programmable Gate Array (FPGA). Sensors 2013, 13, 2967–2985. [Google Scholar] [CrossRef]
  28. Ait Si Ali, A.; Djelouat, H.; Amira, A.; Bensaali, F.; Benammar, M.; Bermak, A. Electronic nose system on the Zynq SoC platform. Microprocess. Microsyst. 2017, 53, 145–156. [Google Scholar] [CrossRef]
  29. Sun, Y.; Luo, D.; Li, H.; Zhu, C.; Xu, O.; Gholam Hosseini, H. Detecting and Identifying Industrial Gases by a Method Based on Olfactory Machine at Different Concentrations. J. Electr. Comput. Eng. 2018, 2018, 1–9. [Google Scholar] [CrossRef] [Green Version]
  30. Shi, Y.; Gong, F.; Wang, M.; Liu, J.; Wu, Y.; Men, H. A deep feature mining method of electronic nose sensor data for identifying beer olfactory information. J. Food Eng. 2019. [Google Scholar] [CrossRef]
  31. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
  32. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library; Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2019; pp. 8024–8035. [Google Scholar]
Figure 1. Odor Labeling Convolutional Encoder–Decoder (OLCE) workflow.
Figure 2. The architecture of the experimental setup for collecting odor response data. Chinese Herbal medicines were selected as the experimental materials, and they were placed in clean beakers covered by a sealed film. A needle was inserted into the bottom for clean-air refilling. Another needle was inserted on the roof of the beaker beneath the sealed film for collecting headspace gases. The clean-air inlet was connected to the purge gas port on the PEN-3 e-nose for flushing gas sensors, and the waste-air outlet was connected to the waste port for ejecting waste gases. Response data were collected by the e-nose and transmitted to a computer via a USB Type-B cable connecting e-nose and computer.
Figure 3. The basic principle of OLCE.
Figure 4. Architecture of the odor labeling convolutional encoder–decoder.
Figure 5. Centralized sensing responses of seven types of Chinese herbal medicines (Betel Nut (BN), Dried Ginger (DG), Rhizoma Alpiniae Officinarum (RAO), Tree Peony Bark (TPB), Fructus Amomi (FAm), Rhizoma Curcumae Aeruginosae (RCA), Fructus Aurantii (FAu)). We randomly selected four response samples of each medicine. All response data were centralized by zero-center normalization.
Figure 6. The original responses and decoded responses. In (a,b), the left column shows the input responses of OLCE encoder, and the right column describes the output responses of OLCE decoder. The subfigure (c) highlights the significant features extracted by decoder.
Table 1. Descriptions of the sensor array in the PEN-3 e-nose.

Sensor   Sensor Sensitivity and General Description
W1C      Aromatic compounds.
W5S      Very sensitive, broad range of sensitivity, reacts to nitrogen oxides, very sensitive with negative signals.
W3C      Ammonia, used as sensor for aromatic compounds.
W6S      Mainly hydrogen.
W5C      Alkanes, aromatic compounds, less polar compounds.
W1S      Sensitive to methane. Broad range.
W1W      Reacts to sulphur compounds, H2S. Otherwise sensitive to many terpenes and sulphur-containing organic compounds.
W2S      Detects alcohol, partially aromatic compounds, broad range.
W2W      Aromatic compounds, sulphur organic compounds.
W3S      Reacts to high concentrations (>100 mg/kg) of methane–aliphatic compounds.
Table 2. Settings of the PEN-3 electronic nose.

Options                  Settings
Sample interval          1.0 s
Presampling time         5.0 s
Zero point trim time     5.0 s
Measurement time         120 s
Flushing time            120 s
Chamber flow             150 mL/min
Initial injection flow   150 mL/min
Table 3. Structural parameters of the odor labeling convolutional encoder–decoder.

Layer              Type              Filter Shape   Input Size
Conv1              conv              7 × 1 × 5      10 × 1 × 120
-                  Maxpool           1 × 2          7 × 1 × 116
Conv2              conv              12 × 1 × 3     7 × 1 × 58
-                  Maxpool           1 × 2          12 × 1 × 56
FC3                FC                7 × 336        12 × 1 × 28
Classifier         Softmax           -              7
FC3                FC                336 × 7        7
-                  Unpool            1 × 2          12 × 1 × 28
Transposed Conv2   Transposed conv   7 × 1 × 3      12 × 1 × 56
-                  Unpool            1 × 2          7 × 1 × 58
Transposed Conv1   Transposed conv   10 × 1 × 5     7 × 1 × 116
Table 4. Performance evaluation indexes for the OLCE model.

No.       Accuracy   Precision   Recall   F1 Score   Kappa
1         0.9142     0.9269      0.9276   0.9249     0.9130
2         0.8800     0.9635      0.9576   0.9584     0.9533
3         0.9485     0.8858      0.8624   0.8691     0.8520
4         0.9428     0.9312      0.9347   0.9320     0.9197
5         0.9714     0.9163      0.9157   0.9129     0.8998
6         0.9200     0.9333      0.9354   0.9330     0.9196
7         0.8971     0.9599      0.9590   0.9591     0.9532
8         0.9428     0.8397      0.8419   0.8379     0.8193
9         0.9485     0.9404      0.9404   0.9395     0.9264
10        0.8914     0.9317      0.9314   0.9296     0.9198
Average   0.9257     0.9229      0.9206   0.9196     0.9076
Table 5. Accuracy scores of six models (linear discriminant analysis (LDA), multi-layer perceptron (MLP), decision tree (DT), principal component analysis (PCA) + LDA, convolutional neural network + support vector machine (CNN–SVM), and OLCE). Columns 1st–10th give the prediction accuracy of the 10 runs of each model. In the PCA–LDA model, a grid search was used to find the best number of PCA dimensions, which was 49.

Models    1st      2nd      3rd      4th      5th      6th      7th      8th      9th      10th     Max.     Min.     Ave.     Var.
LDA       0.9029   0.9257   0.9314   0.8686   0.8971   0.9029   0.9314   0.9200   0.9086   0.8800   0.9314   0.8686   0.9069   0.0005
MLP       0.4342   0.2114   0.1200   0.1542   0.3771   0.2628   0.2285   0.1428   0.1542   0.2800   0.4342   0.1200   0.2365   0.0109
DT        0.8629   0.8114   0.8514   0.7600   0.8514   0.7943   0.8400   0.8229   0.8857   0.8171   0.8857   0.7600   0.8297   0.0013
PCA–LDA   0.2857   0.4342   0.3200   0.5200   0.4057   0.1542   0.4400   0.1828   0.4342   0.3200   0.5200   0.1542   0.3497   0.0141
CNN–SVM   0.9371   0.9085   0.9142   0.9028   0.9314   0.9085   0.9028   0.8514   0.9314   0.9085   0.9371   0.8514   0.9097   0.0006
OLCE      0.9142   0.8800   0.9485   0.9428   0.9714   0.9200   0.8971   0.9428   0.9485   0.8914   0.9714   0.8800   0.9257   0.0009
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
