Article

Multi-Task Learning for Multi-Dimensional Regression: Application to Luminescence Sensing

by Umberto Michelucci 1,* and Francesca Venturini 1,2
1 TOELT LLC, Birchlenstr. 25, 8600 Dübendorf, Switzerland
2 Institute of Applied Mathematics and Physics, Zurich University of Applied Sciences, Technikumstrasse 9, 8401 Winterthur, Switzerland
* Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(22), 4748; https://doi.org/10.3390/app9224748
Submission received: 30 September 2019 / Revised: 28 October 2019 / Accepted: 3 November 2019 / Published: 7 November 2019
(This article belongs to the Special Issue Intelligence Systems and Sensors)


Featured Application

Multi-task learning; multi-parameter luminescence sensing.

Abstract

The classical approach to non-linear regression in physics is to take a mathematical model describing the functional dependence of the dependent variable on a set of independent variables and then, using non-linear fitting algorithms, extract the parameters used in the model. Real systems are particularly challenging, since they are characterized by several additional influencing factors related to specific components, like electronics or optical parts. In such cases, empirically determined terms are built into the model to compensate for quantities that are, by construction, difficult to model. A new approach to solve this issue is to use neural networks, particularly feed-forward architectures with a sufficient number of hidden layers and an appropriate number of output neurons, each responsible for predicting one of the desired variables. Unfortunately, feed-forward neural networks (FFNNs) usually perform less efficiently when applied to multi-dimensional regression problems, that is, when they are required to predict simultaneously multiple variables that depend on the input dataset in fundamentally different ways. To address this problem, we propose multi-task learning (MTL) architectures. These are characterized by multiple branches of task-specific layers, which take as input the output of a common set of layers. To demonstrate the power of this approach for multi-dimensional regression, the method is applied to luminescence sensing. Here, the MTL architecture allows predicting multiple parameters, the oxygen concentration and the temperature, from a single set of measurements.

1. Introduction

The classical use of regression in physics, sometimes also referred to as non-linear fitting, is to determine $d$ quantities $y \in \mathbb{R}^d$ from a set of $n$ measurements $x \in \mathbb{R}^q$, with $q \in \mathbb{N}$, using a theoretical mathematical model $y = f(x, w)$ that depends on a certain number $p$ of parameters $w \in \mathbb{R}^p$. Typically, this is achieved by choosing the parameters $w$ that minimize a selected error function, like the mean square error (MSE), with specific algorithms. Finding the best solution for $f$ is a classical optimization problem [1,2,3]. This method, however, fails to deliver stable and accurate results, for example, when the quantities $y_i$ with $i = 1, \ldots, d$ have different physical meanings and, consequently, depend on different components of the parameter vector $w$ in fundamentally distinct ways. As a result, the mathematical model may be an insufficient approximation, may be too complex for a stable implementation, or may simply be unknown [3].
An example where the usual multi-dimensional regression approach fails is the determination of a substance from changes in its luminescence when several environmental conditions vary in an unknown and uncontrolled way. Luminescence quenching for oxygen detection is a widespread application relevant in many fields like biomedical imaging, environmental monitoring, or process control [4] (see Section 4 for details). In this application, the quantity of interest is the concentration of molecular oxygen $[O_2]$. The measured quantity, either the luminescence intensity or the luminescence intensity decay time of a special molecule (luminophore), is, however, equally strongly dependent on the concentration $[O_2]$ and the temperature $T$. As a result, it is difficult to extract two different physical quantities, namely $[O_2]$ and $T$, from the same set of data. Usually, $T$ is measured separately with another device and given as an input to a mathematical model describing the dependency of those two quantities on the input data. The complexity increases further if more than one luminophore is present and several parameters (e.g., $[O_2]$, $[CO_2]$, pH) have to be determined [5,6,7,8,9].
A possible method, which recently attracted great interest, is the use of feed-forward neural network (FFNN) architectures, with a certain number of hidden layers and an appropriate number of output neurons, each responsible for predicting one of the desired variables $y_i$ with $i = 1, \ldots, d$. In the example of oxygen sensing, the output layer would have one neuron for the oxygen concentration $[O_2]$ and one for the temperature $T$. This work shows that, since the output neurons must use the same features (the output of the last hidden layer) for all variables [10,11], FFNNs are insufficiently flexible. For the cases when the variables depend on the inputs in fundamentally different ways, this approach will give a result that is at best acceptable and at worst unusable.
This work proposes a new approach, which is based on multi-task learning (MTL) neural network architectures. This type of architecture is characterized by multiple branches of layers that get their input from a common set of layers. Such networks can improve the model prediction performance by jointly learning correlated tasks [10,11,12,13,14]. In particular, the proposed MTL architectures are applied to the problem of luminescence quenching for oxygen sensing. Their performance in the prediction of oxygen concentration and temperature is analyzed and compared to that of a classical feed-forward neural network.
In general, the proposed MTL approach may be of particular relevance in all those cases where the mathematical model $y = f(x, w)$ is unknown, too complex, or not really of interest, and the only goal of the regression problem is to build a system that is able to determine $y$ as accurately as possible.
The paper is organized as follows: Section 2 describes non-linear regression and MTL with neural networks. Section 3 describes the implementation of MTL and the different neural networks studied in this work. Section 4 reviews luminescence quenching for oxygen sensing. The results are discussed in Section 5.

2. Theoretical Background

This section briefly reviews the theoretical justification for non-linear regression with neural networks, as well as the multi-task learning approach implemented in this work.

2.1. Neural Networks for Non-Linear Regression Problems

In general, a neural network model is always composed of three parts [15]:
  • network architecture (number of layers, activation functions, etc.),
  • cost function,
  • optimizer (a method or algorithm used to minimize the cost function).
The neural networks considered in this work have a feed-forward architecture, as is typical in regression problems. The details of the networks are described in Section 3. The cost function needs to be chosen depending on the problem to be solved. For example, the cross-entropy is a common choice when solving classification problems [15]. For regression problems, such as the one studied in this work, the most common cost function is the mean square error (MSE), which is defined as
$$\mathrm{MSE} = \frac{1}{n} \sum_{j=1}^{n} \sum_{k=1}^{d} \left( y_k^{[j]} - \hat{y}_k^{[j]} \right)^2 \qquad (1)$$
where $n$ is the number of observations in the input dataset; $y^{[j]} \in \mathbb{R}^d$ is the measured value of the desired quantity for the $j$th observation (indicated as a superscript between square brackets), with $j = 1, \ldots, n$; and $\hat{y}^{[j]} \in \mathbb{R}^d$ is the output of the network when evaluated on the $j$th observation. The optimizer affects the learning performance of the network but does not determine the type of problems the network can solve and therefore will not be discussed here.
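As an illustration (not part of the original implementation), Equation (1) can be computed with a few lines of NumPy; the function and array names are placeholders.

```python
import numpy as np

def mse(y_true, y_pred):
    """Equation (1): squared differences summed over the d output
    components and averaged over the n observations."""
    y_true = np.asarray(y_true, dtype=float)   # shape (n, d)
    y_pred = np.asarray(y_pred, dtype=float)   # shape (n, d)
    return np.sum((y_true - y_pred) ** 2, axis=1).mean()
```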
A regression problem consists of minimizing the cost function, in this case the MSE (Equation (1)), with respect to the learnable parameters of the network, which are defined in the architecture. The implicit assumption made is that there is an underlying, albeit unknown, function that describes the relationship between the $y^{[j]}$ and the input observations (the measurements $x^{[j]}$). Assuming its existence, the neural network tries to approximate it by composing a large number of non-linear functions. This approach relies on the implicit assumption that a network can approximate any function. For FFNNs, this assumption is legitimate since it was proved mathematically [16,17,18,19,20,21,22,23]. This mathematical proof thus justifies the use of neural networks for regression problems. Unfortunately, not being a constructive proof, it provides neither the number of layers nor the number of neurons per layer needed to approximate this unknown function. It only guarantees that, with enough neurons, a neural network is able to approximate any function.

2.2. Multi-Task Learning

Multi-task learning is a machine learning technique in which $n_T$ learning tasks are solved at the same time, exploiting commonalities and differences across tasks. This approach may result in improved learning efficiency and prediction accuracy [12,13,14,24], although the possibility of improvement depends on how the information is encoded in the data. In this work, MTL is applied, for the first time, to luminescence sensing, where the luminescence data depend on two quantities, the oxygen concentration and the temperature, which are otherwise hard to extract separately.
An example of a simple MTL network architecture, which reflects the architectures later used in the paper, is shown in Figure 1. This network consists of a series of common hidden layers, followed by two branches ($n_T = 2$), each consisting of several task-specific hidden layers.
The layers marked in Figure 1 as “common hidden layers” generate an output that is typically called a “shared representation”. The name comes from the fact that the output of those layers is used to evaluate both $y_1$ and $y_2$. The shared representation is then the input of a set of “task-specific hidden layers” that learn how to better predict $y_1$ and $y_2$. Note how the common hidden layers are shared between the tasks of predicting $y_1$ and $y_2$, while the task-specific hidden layers are specific to each task. The MTL network of Figure 1 uses the common hidden layers to find common features beneficial to both tasks. During the training phase, learning to predict $y_1$ will influence the common hidden layers and, therefore, the prediction of $y_2$, and vice-versa. A set of task-specific hidden layers will then learn features specific to each output and therefore improve the prediction accuracy. The implicit assumption here is that the tasks have something in common; otherwise, this approach will not produce the desired result.
Multiple cost functions $L_i$ with $i = 1, \ldots, n_T$, with $n_T$ the number of tasks, are required to use this network architecture. In the training phase, a global cost function $L$, defined as a linear combination of the task-specific cost functions with weights $\alpha_i$, is minimized
$$L = \sum_{i=1}^{n_T} \alpha_i L_i. \qquad (2)$$
The parameters $\alpha_i$ have to be determined during the hyper-parameter tuning phase to optimize the network predictions. In this paper, since the cost function is the MSE (Equation (1)), the global cost function of Equation (2) becomes
$$L = \sum_{i=1}^{n_T} \alpha_i \, \frac{1}{n} \sum_{j=1}^{n} \sum_{k=1}^{d} \left( y_k^{[j]} - \hat{y}_k^{[j]} \right)^2 \qquad (3)$$
where $n_T$ is the number of tasks; $n$ is the number of observations in the input dataset; $y^{[j]} \in \mathbb{R}^d$ is the measured value of the desired quantity for observation $j$, with $j = 1, \ldots, n$; and $\hat{y}^{[j]} \in \mathbb{R}^d$ is the output of the network when evaluated on the $j$th observation.
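A minimal sketch of the global cost of Equation (2), assuming one MSE term per task as in Equation (3); the function and variable names are illustrative and not taken from the original code.

```python
import tensorflow as tf

def global_cost(alphas, y_true_per_task, y_pred_per_task):
    """Equation (2): weighted linear combination of the task-specific MSEs."""
    total = 0.0
    for alpha, y_true, y_pred in zip(alphas, y_true_per_task, y_pred_per_task):
        # Task-specific MSE as in Equation (1)
        task_mse = tf.reduce_mean(tf.reduce_sum(tf.square(y_true - y_pred), axis=-1))
        total += alpha * task_mse
    return total
```

In Keras, the same weighting can also be expressed by assigning one loss per output branch and passing the weights $\alpha_i$ through the `loss_weights` argument of `model.compile`, which is the route followed in the network C sketch of Section 3.3.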

3. Neural Network Architectures and Implementation

In this paper, three architectures, one classical FFNN and two MTL networks, were investigated and compared in the simultaneous prediction of oxygen concentration and temperature. To make the comparison meaningful, the parameters that are not architecture-specific were not varied. The details of the architectures are described in the next subsections.
In the three architectures investigated, the sigmoid activation function was used for all neurons:

$$\sigma(z) = \frac{1}{1 + e^{-z}}. \qquad (4)$$
All the results were obtained with a training of 4000 epochs. The target variables $y$ were normalized to vary between 0 and 1; thus, the sigmoid activation function was also used for the output neurons $y_1$ and $y_2$. The input measurement, as will be explained in detail in Section 4, is a vector in $\mathbb{R}^q$ with $q = 16$.
To minimize the cost function, the Adaptive Moment Estimation (Adam) optimizer [15,25] was used. The training was performed with a starting learning rate of $10^{-3}$ and with batch learning, which means that the weights were updated only after the entire training dataset had been fed to the network. Batch learning was chosen because of its stability and speed, since it reduces the training time by a few orders of magnitude in comparison to, for example, stochastic gradient descent [15]. It therefore makes experimenting with different networks a feasible endeavor. The implementation was performed using the TensorFlow library.
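For concreteness, the training configuration described above can be expressed with the Keras API of TensorFlow as in the following sketch; the model and the random data are placeholders only, and the original implementation may have used the lower-level TensorFlow API instead.

```python
import numpy as np
import tensorflow as tf

# Placeholder model and data, only to make the snippet self-contained;
# the actual architectures are described in Sections 3.1-3.3.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(50, activation="sigmoid"),
    tf.keras.layers.Dense(2, activation="sigmoid"),
])
x_train = np.random.rand(20_000, 16)
y_train = np.random.rand(20_000, 2)

# Adam with the starting learning rate of 10^-3 mentioned above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# Batch learning: the entire training set is fed before each weight update,
# i.e., one update per epoch (4000 epochs in the paper).
model.fit(x_train, y_train, batch_size=len(x_train), epochs=4000, verbose=0)
```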

3.1. Network A

The first type of neural network investigated has a classical feed-forward architecture, consisting of an input layer, three hidden layers, and an output layer with two neurons, $[O_2]_{pred}$ and $T_{pred}$. This architecture, labeled here as network A, is schematically shown in Figure 2. Each hidden layer has the same number of neurons $n_i = \hat{n}$.
Each neuron in each layer gets as input the output of all neurons in the previous layer and feeds its output to each neuron in the subsequent layer. To test the performance of network A, hyperparameter tuning was performed by varying the number of neurons $\hat{n}$ in the hidden layers. The values tested were $\hat{n} = 10, 30, 50, 80$. Additional hyperparameters, like the learning rate, were not optimized, and the values mentioned above were kept constant.
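A possible Keras definition of network A, given as a sketch based on the description above (the original code is not reproduced here):

```python
import tensorflow as tf

n_hat = 50  # neurons per hidden layer; 10, 30, 50, and 80 were tested

# Network A: 16 inputs (the ratios r at the 16 modulation frequencies, see Section 4),
# three sigmoid hidden layers, and two sigmoid outputs ([O2]_pred and T_pred,
# both normalized to [0, 1]).
network_a = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(n_hat, activation="sigmoid"),
    tf.keras.layers.Dense(n_hat, activation="sigmoid"),
    tf.keras.layers.Dense(n_hat, activation="sigmoid"),
    tf.keras.layers.Dense(2, activation="sigmoid"),
])
```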

3.2. Network B

The first MTL network studied is depicted in Figure 3. It consists of three common hidden layers with 50 neurons each, followed by two branches: one with two additional task-specific hidden layers used to predict $[O_2]$, and one without additional hidden layers used to predict both $[O_2]$ and $T$ at the same time. Each task-specific hidden layer has 5 neurons. The idea behind this network is to have a system that learns to predict $[O_2]$ well, thanks to the additional task-specific layers. The predicted $T$ is not expected to be exceptionally good, since the common hidden layers must learn to predict $[O_2]_{pred}$ and $T_{pred}$ at the same time. This architecture can be applied when one of the outputs $y_i$, here $[O_2]$, needs to be predicted with higher accuracy than the others. For this network, the global cost function weights used were $\alpha_1 = 0.3$ and $\alpha_2 = 5$.

3.3. Network C

The last MTL network, depicted in Figure 4, again consists of three common hidden layers with 50 neurons each, followed by three branches: two with two additional task-specific layers each, predicting $[O_2]$ and $T$ respectively, and one without additional layers, predicting $[O_2]$ and $T$ at the same time. Each task-specific hidden layer has 5 neurons, as in network B. The global cost function weights used for the plots were $\alpha_1 = 0.3$, $\alpha_2 = 5$, and $\alpha_3 = 1$. These values were chosen because they result in the lowest MAEs (see the discussion in Section 5).
This network is of interest because of the additional task-specific layers, which are expected to improve the ability to predict the temperature compared to network B.
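The following sketch expresses network C with the Keras functional API. The layer sizes follow the description above, while the assignment of the weights to the branches ($\alpha_1$ for the joint branch, $\alpha_2$ for the $[O_2]$ branch, $\alpha_3$ for the $T$ branch) is an assumption, chosen to be consistent with the description of network B.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))

# Three common hidden layers (shared representation), 50 neurons each.
x = inputs
for _ in range(3):
    x = tf.keras.layers.Dense(50, activation="sigmoid")(x)

# Branch predicting [O2] and T together, without task-specific hidden layers.
joint_out = tf.keras.layers.Dense(2, activation="sigmoid", name="joint")(x)

# Task-specific branch for [O2]: two hidden layers with 5 neurons each.
o2 = tf.keras.layers.Dense(5, activation="sigmoid")(x)
o2 = tf.keras.layers.Dense(5, activation="sigmoid")(o2)
o2_out = tf.keras.layers.Dense(1, activation="sigmoid", name="o2")(o2)

# Task-specific branch for T: two hidden layers with 5 neurons each.
t = tf.keras.layers.Dense(5, activation="sigmoid")(x)
t = tf.keras.layers.Dense(5, activation="sigmoid")(t)
t_out = tf.keras.layers.Dense(1, activation="sigmoid", name="temperature")(t)

network_c = tf.keras.Model(inputs, [joint_out, o2_out, t_out])

# Global cost of Equation (2): one MSE per branch, combined with the weights
# alpha_i (assumed mapping: alpha_1 = 0.3, alpha_2 = 5, alpha_3 = 1).
network_c.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="mse",
                  loss_weights=[0.3, 5.0, 1.0])
```

Removing the temperature branch and keeping only the joint and $[O_2]$ outputs yields an analogous sketch of network B.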

3.4. Metrics

The metric used to compare the results from the different network models is the absolute error ($AE$), defined as the absolute value of the difference between the predicted and the expected value for a given observation. For the oxygen concentration of the $j$th observation, $[O_2]^{[j]}$, the $AE$ is
$$AE_{[O_2]}^{[j]} = \left| [O_2]_{pred}^{[j]} - [O_2]_{meas}^{[j]} \right|. \qquad (5)$$
The further quantity used to analyze the performance of the networks is the mean absolute error ($MAE$), defined as the average of the absolute value of the difference between the predicted and the expected oxygen concentration or temperature. For example, for the oxygen prediction on the training dataset $S_{train}$, $MAE_{[O_2]}$ is defined as
$$MAE_{[O_2]}(S_{train}) = \frac{1}{|S_{train}|} \sum_{j \in S_{train}} \left| [O_2]_{pred}^{[j]} - [O_2]_{real}^{[j]} \right| \qquad (6)$$
where $|S_{train}|$ is the size (or cardinality) of the training dataset; in this work, $|S_{train}| = 20{,}000$. $AE_T$ and $MAE_T$ are defined analogously.
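As a worked example of Equations (5) and (6), with illustrative placeholder values:

```python
import numpy as np

# Predicted and measured oxygen concentrations for a small dataset (placeholders).
o2_pred = np.array([10.2, 49.5, 81.0])   # % air
o2_meas = np.array([10.0, 50.0, 80.0])   # % air

ae_o2 = np.abs(o2_pred - o2_meas)   # Equation (5): one value per observation
mae_o2 = ae_o2.mean()               # Equation (6): average over the dataset
```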

4. Luminescence Quenching for Oxygen and Temperature Sensing

To demonstrate its advantages, the MTL approach was applied to the simultaneous determination of the oxygen concentration and the temperature of a medium. Different optical methods are used to determine the oxygen concentration, since this quantity is of great relevance for numerous research and application fields, ranging from biomedical imaging, packaging, environmental monitoring, and process control to the chemical industry, to mention only a few [26]. Among the optical methods, a well-known approach is based on luminescence quenching [27,28,29].
The measuring principle is based on the quenching of the luminescence of a specific molecule (luminophore) by oxygen molecules. Because of the collisions of the luminophore with oxygen, both the luminescence intensity and decay time are reduced. Sensors based on this principle rely on approximate empirical models to parametrize the dependence of the sensing quantity (e.g., luminescence intensity or intensity decay time) on influencing factors. The most relevant parameter, which can be a major source of error in sensors based on luminescence sensing, is the temperature of the luminophore, since both the luminescence and the quenching phenomena are strongly dependent on temperature [26].
The conventional approach consists in relating the change of the luminescence decay time to the oxygen concentration through a multi-parametric model, the Stern–Volmer equation [28]. The values of the device-specific constants are then determined through calibration. The decay time can easily be measured by modulating the intensity of the excitation. The emitted luminescence is also modulated but shows a phase shift $\theta$ which depends on the decay time. Without going into the details of the analytical model, the measured quantity, the phase shift $\theta$, is most frequently related to the oxygen concentration $[O_2]$ and temperature $T$ through the approximate equation [30]
$$\frac{\tan\theta(\omega, T, [O_2])}{\tan\theta_0(\omega, T)} = \left[ \frac{f(\omega, T)}{1 + K_{SV_1}(\omega, T) \cdot [O_2]} + \frac{1 - f(\omega, T)}{1 + K_{SV_2}(\omega, T) \cdot [O_2]} \right]^{-1} \qquad (7)$$
where $\theta_0$ and $\theta$ are the phase shifts in the absence and presence of oxygen, respectively; $f$ and $1 - f$ indicate the fractions of the total emission of the two components under unquenched conditions; $K_{SV_1}$ and $K_{SV_2}$ are the associated (Stern–Volmer) constants of each component; and $\omega$ is the angular frequency of the modulation of the excitation light. Since the phenomena of luminescence and luminescence quenching are strongly influenced by the temperature, the parameters $\theta_0$, $K_{SV_1}$, $K_{SV_2}$, and $f$ need to be modeled through different temperature dependencies [30]. The values of these parametrization quantities are determined through non-linear regression. Finally, Equation (7) must be inverted to obtain $[O_2]$ as a function of $\theta$, $T$, and $\omega$. To provide more information as input to the network, not a single frequency $\omega$ but 16 different values are used. Let us define
$$r(\omega, T, [O_2]) \equiv \frac{\tan\theta(\omega, T, [O_2])}{\tan\theta(\omega, T, [O_2] = 0)}. \qquad (8)$$
The goal of the network is to predict the oxygen concentration and the temperature from an array of values of $r(\omega, T, [O_2])$ evaluated at the discrete set of sixteen frequencies $\omega_i$, with $i = 1, \ldots, 16$, that have been used for the measurements. The $j$th measurement can be written as $x^{[j]} = (r_1^{[j]}, r_2^{[j]}, \ldots, r_{16}^{[j]})$ with $r_i^{[j]} = r(\omega_i, T^{[j]}, [O_2]^{[j]})$ and $i = 1, \ldots, 16$. Each measurement $j$ corresponds to a specific tuple of oxygen concentration and temperature $(T^{[j]}, [O_2]^{[j]})$.
Summarizing, the conventional approach relies on the measurement of the temperature, which is then used to correct the parameters of the analytical model used to calculate the oxygen concentration $[O_2]$ from the measured quantity, the phase shift $\theta$ of Equation (7). The inadequate determination of the luminophore temperature is one of the major sources of error in an optical oxygen sensor.
The neural network approach proposed in this work circumvents the difficulties described above by simultaneously predicting both the oxygen concentration and the temperature from the 16 values of $r(\omega, T, [O_2])$ evaluated at the discrete set of sixteen values of $\omega$.

Data Generation

To have a large enough dataset to train and test the neural networks, synthetic data were used. The model described by Equation (7) was chosen to create the data, being as simple as possible but still capable of describing the experimental observations. The values of the parameters for the synthetic data were determined from measurements performed under varying oxygen concentration and temperature conditions. For details on the samples and setup used for the determination of all the parameters, the reader is referred to [30].
The synthetic data consist of a set $S$ of $m = 25{,}000$ observations with oxygen concentration values uniformly distributed between 0% air and 100% air and five temperatures, 5, 15, 25, 35, and 45 °C. Please note that, in the following, the concentration of oxygen is given in % of the oxygen concentration of dry air and indicated with % air. This means that 100% air corresponds to 20% vol $O_2$. The $m$ observations were split randomly into a training dataset containing 80% of the data ($|S_{train}| = 20{,}000$), used to train the network, and a development dataset containing 20% of the data ($|S_{dev}| = 5000$), used to test the generalization efficiency of the network on unseen data.
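The sketch below illustrates only the sampling and splitting scheme described above. The two-site model parameters used in it are purely hypothetical placeholders; the actual values and their temperature and frequency dependencies are those determined in [30] and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 25_000

def r_model(i, T, o2):
    """Ratio r of Equation (8) at the i-th modulation frequency, from the two-site
    model of Equation (7). All parameter values below are hypothetical placeholders."""
    f = 0.80 + 0.01 * i                 # unquenched fraction of component 1
    k_sv1 = 0.020 * (1.0 + 0.01 * T)    # Stern-Volmer constant 1, in 1/(% air)
    k_sv2 = 0.002 * (1.0 + 0.01 * T)    # Stern-Volmer constant 2, in 1/(% air)
    return 1.0 / (f / (1.0 + k_sv1 * o2) + (1.0 - f) / (1.0 + k_sv2 * o2))

o2_all = rng.uniform(0.0, 100.0, size=m)                    # uniform in % air
T_all = rng.choice([5.0, 15.0, 25.0, 35.0, 45.0], size=m)   # five temperatures, °C

X = np.array([[r_model(i, T, o2) for i in range(16)] for T, o2 in zip(T_all, o2_all)])
y = np.column_stack([o2_all / 100.0, (T_all - 5.0) / 40.0])  # targets scaled to [0, 1]

idx = rng.permutation(m)                                     # random 80/20 split
X_train, y_train = X[idx[:20_000]], y[idx[:20_000]]
X_dev, y_dev = X[idx[20_000:]], y[idx[20_000:]]
```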
Typically, when training neural network models, it is important to check whether the model is in a so-called overfitting regime. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise or errors) as if that variation represented an underlying model structure [31]. In the case discussed in this work, the network will never go into such a regime with increasing network complexity, since the development dataset is a perfect representation of the training dataset. This leads to almost identical $MAE$ values for both $S_{train}$ and $S_{dev}$, regardless of the effective complexity of the network architecture, which is what was observed while checking the metrics on the two datasets. Overfitting becomes relevant when dealing with real measurements rather than synthetic data.

5. Results and Discussion

As described in Section 4, the applied problem investigated in this work is a complex one, since the two quantities to be extracted from the data ($[O_2]$ and $T$) depend on the input in different ways. It is, therefore, not obvious that it is possible to build a model which is able to predict both $[O_2]$ and $T$ at the same time with good accuracy.
The first network investigated is the simple FFNN A described in Section 3.1. For this network, the number of neurons was progressively increased ($\hat{n} = 10, 30, 50, 80$) to study how $AE_{[O_2]}$ and $AE_T$ are affected by an increasingly complex network and to determine whether it is possible to obtain a good prediction. The calculated $AE_{[O_2]}$ values for different $O_2$ concentrations were grouped in bins of 10% air for a clearer illustration and are shown in Figure 5 as a box plot, where the median is visible as a red line. In all the box plots in this paper, the central box is the interquartile range and contains the middle 50% of the results, while the whiskers indicate the minimum and maximum of all the data [32].
As can be seen in Figure 5, the results are quite poor for $\hat{n} = 30$ (results for $\hat{n} = 10$ are comparable to those for $\hat{n} = 30$ and are not shown here). $AE_{[O_2]}$ can assume values as large as 18% air, with a broad distribution. Increasing the number of neurons in the hidden layers to $\hat{n} = 50$ improves the prediction, reducing both the median and the spread of the distribution. A further increase to $\hat{n} = 80$, however, does not result in a better prediction, showing the limits of this architecture in capturing the details of the physical system.
The results for the prediction of the temperature for the same three networks are shown in Figure 6. $AE_T$ also improves initially when the number of neurons is increased to $\hat{n} = 50$, but does not get any better when the number of neurons is further increased to $\hat{n} = 80$. The box plots of Figure 5 and Figure 6 show that $AE_{[O_2]}$ and $AE_T$ can assume quite high values, demonstrating that the model is not able to make a prediction with an accuracy that could be used in any commercial application.
The performance of the three FFNNs of type A can be summarized by calculating the $MAE$ as defined in Equation (6). The results are listed in Table 1. Consistent with what was previously observed for the absolute error, the best network performance is obtained with $\hat{n} = 50$, achieving mean absolute errors of $MAE_{[O_2]} = 1.7$% air and $MAE_T = 3.3$ °C.
For a practical application, the probability density distributions of the $AE$s for both parameters represent a more fundamental quantity, since they carry information on the probability that the network predicts the expected value. For this reason, the kernel density estimate (KDE) of the distributions of the $AE$s was used for the analysis. KDE is a non-parametric algorithm to estimate the probability density function of a random variable by inferring the population distribution from a finite data sample [33]. For the plots, a Gaussian kernel and Scott's bandwidth estimation were used, as implemented in the seaborn Python package [34]. The results for $AE_{[O_2]}$ and $AE_T$ for the three variations of FFNN A are shown in Figure 7 and Figure 8, respectively.
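Plots of this kind can be reproduced with seaborn along the following lines (a sketch; `ae_o2` stands for the array of $AE_{[O_2]}$ values on the development set and is filled here with placeholder data). By default, `seaborn.kdeplot` uses a Gaussian kernel with Scott's rule for the bandwidth, matching the choices mentioned above.

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

ae_o2 = np.abs(np.random.rand(5000) * 3.0)   # placeholder AE values, % air

sns.kdeplot(ae_o2)                           # Gaussian kernel, Scott bandwidth (defaults)
plt.xlabel(r"$AE_{[O_2]}$ (% air)")
plt.ylabel("density")
plt.show()
```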
From Figure 7 and Figure 8, it can clearly be seen that increasing the number of neurons helps at the beginning, while a further increase in $\hat{n}$ does not improve the prediction quality; on the contrary, it gets worse. These results indicate that this simple FFNN can extract the two quantities at the same time only with an accuracy which is at best poor and at worst unusable.
Networks B and C try to address this problem by adding, as described in the previous sections, one and two branches, respectively, after the last hidden layer of network A. The results of the predictions from networks B and C are then compared to those from network A with $\hat{n} = 50$. Figure 9 shows the calculated $AE_{[O_2]}$ for the three networks for the same $[O_2]$ intervals as before as a box plot, where the median is visible as a red line.
As can be seen from Figure 9, the error in the prediction of network B is similar to that of network A. However, $AE_{[O_2]}$ is significantly improved when using network C. The additional branch in network C compared to network B clearly makes the predictions much more accurate and, more importantly, much less spread around the median.
The distribution of $AE_{[O_2]}$ is better illustrated by plotting the KDE (Figure 10). The results indicate that, for network C, the distribution assumes much smaller values and is peaked around zero, in contrast with networks A and B, which have a rather wide tail that extends toward higher values, reaching values as high as 10% air for network A and 8% air for network B.
Finally, the results of the same analysis for the prediction of the temperature are shown in Figure 11. Here, the calculated $AE_T$ for the same three networks is shown as a box plot, where the median is visible as a red line.
As can be seen from Figure 11, $AE_T$ is much more concentrated around the median when using network C. These results indicate that the prediction of the temperature is substantially improved when using this network.
The distribution of $AE_T$ estimated with the KDE is shown in Figure 12. Thanks to the additional task-specific hidden layers of network C compared to network B, the KDE is higher and peaked around zero, with practically no contributions above 5 °C.
Finally, the performance of the three neural networks can be summarized by calculating the $MAE$ as defined in Equation (6) for the oxygen concentration and the temperature predictions. The results are listed in Table 2. Network C outperforms all the other networks analyzed in predicting both $[O_2]$ and $T$, achieving a mean absolute error of only 0.5% air for the oxygen concentration and of 2.2 °C for the temperature.
The results of Table 2 show that a simple FFNN like network A is not suitable to extract the two quantities of interest at the same time with good accuracy, since it is not flexible enough. The reason is that the two predicted quantities depend on the same set of features generated by the hidden layers of network A. When network A tries to learn better weights to predict, for example, the temperature, these will also influence the $[O_2]$ prediction, and vice-versa. Consequently, the common set of weights that is learned cannot be optimized for each quantity separately at the same time. The MTL network B tries to address this problem with a separate branch of task-specific layers for $[O_2]$. The tests show, however, that this architecture is only marginally better for the prediction of $[O_2]$ and even worse for the prediction of $T$. This is probably due to an insufficient flexibility of the network and shows that, even if only one parameter were of interest, e.g., $[O_2]$, one single additional branch is not sufficient. A significant improvement is achieved with the MTL network C: the two task-specific branches give the network the flexibility of learning a set of weights (the ones in the branches) specific to each quantity, therefore achieving exceptionally good predictions for both $[O_2]$ and $T$. Note that in this work the hyper-parameter tuning [15] for each network was not performed, since the goal is not to achieve the lowest possible $MAE$s but rather to demonstrate the advantages and potential of MTL compared to classical FFNN approaches. For the implementation in a measuring instrument, a further phase of parameter tuning, specifically dependent on the application, would therefore be needed.
An interesting question is what the mutual influence of the branches in network C is when the loss weights $\alpha_i$ are varied. To answer this question, a study was performed with various values of the global cost function weights. The results are shown in Table 3.
By progressively increasing the weight of the temperature branch, $\alpha_3$, the $MAE_T$ is not reduced further and appears rather insensitive to $\alpha_3$. However, $MAE_{[O_2]}$ increases slightly, since the higher values of $\alpha_3$ shift the relative importance of the tasks the network is trying to learn. Increasing the weight of the oxygen branch, $\alpha_2$, also negatively affects the oxygen prediction, since $MAE_{[O_2]}$ increases slightly. The reason is that $\alpha_2$ becomes much bigger than $\alpha_1$. This shows that, for the prediction of the oxygen concentration, both the branch predicting $T$ and $[O_2]$ at the same time and the branch predicting $[O_2]$ are important: neglecting one will make the other work less efficiently. The temperature, on the other hand, is predicted with almost the same accuracy independently of the weight $\alpha_2$, indicating that the temperature branch does not depend on the $[O_2]$ branch.

6. Conclusions

In this work, different neural network architectures were investigated to solve the problem of extracting multiple separate physical quantities at the same time from a single dataset. This type of multi-dimensional regression problem in physics can be challenging or impossible to solve if the mathematical models describing the functional dependence of the dependent variables on a set of independent variables are too complex or unknown. The proposed approach consists in using neural network MTL architectures, which are characterized by a common set of layers followed by task-specific layers for each quantity to be determined. Thanks to the additional task-specific hidden layers, this type of network can be trained to perform better than conventional FFNNs when the quantities to be predicted are characterized by a significant difference in physical behavior.
The approach is demonstrated by applying it to an oxygen luminescence sensing application. Conventional methods rely on a separate temperature determination, which is then used as an input to correct the extraction of the oxygen concentration from a dataset. This work demonstrates how it is possible to extract from a single dataset of phase shift measurements both the oxygen concentration and the temperature of the medium. The distributions of $AE_{[O_2]}$ and $AE_T$ are significantly narrower and much more concentrated around zero with the proposed MTL network (type C) than with FFNNs without dedicated layers for each of $[O_2]$ and $T$. With the latter networks, the predictions are based only on common features (the ones generated by the common layers), which are not flexible enough to describe both $[O_2]$ and $T$. The results indicate that, from one single measurement, it is possible to determine two physically different quantities, one of which depends on the other. To the best of the authors’ knowledge, this is the first time that more than one parameter (here $[O_2]$ and $T$) is extracted using a single luminophore and a single measurement channel under constant conditions. The implication is that a sensor using the proposed approach could extract much more information from the measurements than one based on conventional analytical modeling.
This work aims to open the way to new methods of extracting multiple physical quantities from a common set of data at the same time, achieving results that are both accurate and stable. The described approach is relevant for many practical applications in sensor science and demonstrates that MTL architectures have the potential of revolutionizing the approach to non-linear multi-dimensional regression.

Author Contributions

Conceptualization, U.M. and F.V.; methodology, U.M. and F.V.; software, U.M.; writing, U.M. and F.V.; physics model and examples, F.V.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
FFNN  Feed-forward neural network
MTL   Multi-task learning
MSE   Mean square error
AE    Absolute error
MAE   Mean absolute error
KDE   Kernel density estimate

References

  1. Nocedal, J.; Wright, S.J. Numerical Optimization; Glynn, P., Robinson, S.M., Eds.; Springer: New York, NY, USA, 1999. [Google Scholar]
  2. Boyd, S.P.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2019; p. 129. [Google Scholar]
  3. James, G.; Witten, D.; Hastie, T.; Tibshirani, R. An Introduction to Statistical Learning; Springer: New York, NY, USA, 2013. [Google Scholar]
  4. Borisov, S.M. Fundamentals of Quenched Phosphorescence O2 Sensing and Rational Design of Sensor Materials. In Quenched-phosphorescence Detection of Molecular Oxygen, 1st ed.; Papkovsky, D.B., Dmitriev, R.I., Eds.; Royal Society of Chemistry: Oxford, UK, 2018; pp. 1–18. [Google Scholar]
  5. Baleizão, C.; Nagl, S.; Schäferling, M.; Berberan-Santos, M.N.; Wolfbeis, O.S. Dual Fluorescence Sensor for Trace Oxygen and Temperature with Unmatched Range and Sensitivity. Anal. Chem. 2008, 80, 6449–6457. [Google Scholar] [CrossRef] [PubMed]
  6. Collier, B.B.; McShane, M.J. Simultaneous, accurate lifetime determination of two luminophores using time-domain techniques. Sensors 2011, 943–946. [Google Scholar] [CrossRef]
  7. Pérez de Vargas-Sansalvador, M.; Martinez-Olmos, A.; Palma, A.J.; Fernández-Ramos, M.D.; Capitán-Vallvey, L.F. Compact optical instrument for simultaneous determination of oxygen and carbon dioxide. Microchim. Acta 2011, 172, 455–464. [Google Scholar] [CrossRef]
  8. Lam, H.; Rao, G.; Loureiro, J.; Tolosa, L. Dual Optical Sensor for Oxygen and Temperature Based on the Combination of Time Domain and Frequency Domain Techniques. Talanta 2011, 84, 65–70. [Google Scholar] [CrossRef] [PubMed]
  9. Borisov, S.M.; Seifner, R.; Klimant, I. A novel planar optical sensor for simultaneous monitoring of oxygen, carbon dioxide, pH and temperature. Anal. Bioanal. Chem. 2011, 400, 2463–2474. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Zhang, Y.; Yang, Q. A Survey on Multi-Task Learning. arXiv 2018, arXiv:1707.08114. [Google Scholar]
  11. Thung, K.H.; Wee, C.-Y. A brief review on multi-task learning. Multimed. Tools Appl. 2018, 77, 29705–29725. [Google Scholar] [CrossRef]
  12. Thrun, S. Is learning the n-th thing any easier than learning the first? Adv. Neural Inf. Process. Syst. 1996, 8, 640–646. [Google Scholar]
  13. Baxter, J. A model of inductive bias learning. J. Artif. Intell. Res. 2000, 12, 149–198. [Google Scholar] [CrossRef]
  14. Caruana, R. Multi-task learning. Mach. Learn. 1997, 28, 41–75. [Google Scholar] [CrossRef]
  15. Michelucci, U. Applied Deep Learning—A Case-Based Approach to Understanding Deep Neural Networks; Apress Media, LLC: New York, NY, USA, 2018; pp. 374–375. [Google Scholar]
  16. Irie, B.; Miyake, S. Capabilities of three-layered perceptrons. In Proceedings of the IEEE International Conference on Neural Networks, San Diego, CA, USA, 24–27 July 1988; pp. 641–648. [Google Scholar]
  17. Hornik, K. Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. 1991, 4, 251–257. [Google Scholar]
  18. Cybenko, G. Approximation by Superpositions of a Sigmoidal Function. Math. Control Signal Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
  19. Hanin, B. Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations. Mathematics 2019, 7, 992. [Google Scholar] [CrossRef]
  20. Lu, Z.; Pu, H.; Wang, F.; Hu, Z.; Wang, L. The expressive power of neural networks: A view from the width. In Proceedings of the 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6231–6239. [Google Scholar]
  21. Rojas, R. Neural Networks—A Systematic Introduction; Springer: Berlin/Heidelberg, Germany, 1996; pp. 267–271. [Google Scholar]
  22. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Norfolk, UK, 2005; pp. 139–140. [Google Scholar]
  23. Sprecher, D. On the structure of Continuous Functions of Several Variables. Trans. Am. Math. Soc. 1964, 115, 340–355. [Google Scholar] [CrossRef]
  24. Argyriou, A.; Evgeniou, T.; Pontil, M. Multi-task feature learning. In Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS’06), Vancouver, BC, Canada, 4–7 December 2006; MIT Press: Cambridge, MA, USA, 2006; pp. 41–48. [Google Scholar]
  25. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar]
  26. Wang, X.-D.; Wolfbeis, O.S. Optical methods for sensing and imaging oxygen: materials, spectroscopies and applications. Chem. Soc. Rev. 2014, 43, 3666–3761. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Wolfbeis, O.S. Optical Technology until the Year 2000: An Historical Overview. In Optical sensors: Industrial Environmental and Diagnostic Applications, 1st ed.; Narayanaswamy, R., Wolfbeis, O.S., Eds.; Springer: Berlin, Germany, 2004; pp. 28–30. [Google Scholar]
  28. Lakowicz, J.R. Principles of Fluorescence Spectroscopy, 3rd ed.; Springer: Singapore, 2006. [Google Scholar]
  29. Demas, J.N.; DeGraff, B.A.; Coleman, P.B. Oxygen Sensors Based on Luminescence Quenching. Anal. Chem. 1999, 71, 793A–800A. [Google Scholar] [CrossRef] [PubMed]
  30. Michelucci, U.; Baumgartner, M.; Venturini, F. Optical oxygen sensing with artificial intelligence. Sensors 2019, 19, 777. [Google Scholar] [CrossRef] [PubMed]
  31. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference, 2nd ed.; Springer: New York, NY, USA, 2002. [Google Scholar]
  32. McGill, R.; Tukey, J.W.; Larsen, W.A. Variations of Box Plots. Am. Stat. 1978, 32, 12–16. [Google Scholar]
  33. Hastie, T.; Tibshirani, R.; Friedman, J. The Elements of Statistical Learning; Springer Science-Business Media, LLC: New York, NY, USA, 2013; pp. 208–209. [Google Scholar]
  34. Waskom, M.; Botvinnik, O.; O’Kane, D.; Hobson, D.; Lukauskas, S.; Gemperline, D.; Qalieh, A. A Python Visualisation Library. Available online: http://doi.org/10.5281/zenodo.883859 (accessed on 21 October 2019).
Figure 1. Example of a MTL network architecture with two tasks and two outputs.
Figure 2. Architecture of the feed-forward network A.
Figure 3. Architecture of the feed-forward MTL network B.
Figure 4. Architecture of the feed-forward MTL network C.
Figure 5. Absolute error $AE_{[O_2]}$ in the prediction of the $O_2$ concentration for the different concentration ranges using network A. Left: 30 neurons per hidden layer; middle: 50 neurons per hidden layer; right: 80 neurons per hidden layer.
Figure 6. Absolute error $AE_T$ in the prediction of $T$ for the different temperatures using network A. Left: 30 neurons per hidden layer; middle: 50 neurons per hidden layer; right: 80 neurons per hidden layer.
Figure 7. Kernel density estimation for $AE_{[O_2]}$ with network A. Left: 30 neurons per hidden layer; middle: 50 neurons per hidden layer; right: 80 neurons per hidden layer.
Figure 8. Kernel density estimation for $AE_T$ with network A. Left: 30 neurons per hidden layer; middle: 50 neurons per hidden layer; right: 80 neurons per hidden layer.
Figure 9. Absolute error in the prediction of the $O_2$ concentration for the different concentration ranges using networks A, B, and C. Left: network A with 50 neurons per hidden layer; middle: network B; right: network C.
Figure 10. Kernel density estimation for $AE_{[O_2]}$ for networks A (left), B (middle), and C (right).
Figure 11. Absolute error in the prediction of the temperature using networks A, B, and C. Left: network A with 50 neurons per hidden layer; middle: network B; right: network C.
Figure 12. Kernel density estimation for $AE_T$ for networks A (left), B (middle), and C (right).
Table 1. Summary of the performance for the FFNNs of type A.

$\hat{n}$    $MAE_{[O_2]}$    $MAE_T$
30           6.0% air         9.3 °C
50           1.7% air         3.3 °C
80           2.3% air         4.3 °C
Table 2. Summary of the performance for the three types of neural networks.

Network                       $MAE_{[O_2]}$    $MAE_T$
Network A ($\hat{n} = 30$)    6.0% air         9.3 °C
Network A ($\hat{n} = 50$)    1.7% air         3.3 °C
Network A ($\hat{n} = 80$)    2.3% air         4.3 °C
Network B                     1.5% air         6.5 °C
Network C                     0.5% air         2.2 °C
Table 3. Summary of the performance for network C with various loss weights.

$\alpha_1$   $\alpha_2$   $\alpha_3$   $MAE_{[O_2]}$   $MAE_T$
0.3          5.0          5.0          0.54% air       2.2 °C
0.3          5.0          15.0         0.61% air       2.35 °C
0.3          5.0          25.0         0.89% air       2.32 °C
0.3          1.0          5.0          0.58% air       2.25 °C
0.3          15.0         5.0          0.94% air       2.67 °C
0.3          25.0         5.0          0.96% air       2.55 °C
