Article

MultiSec: Multi-Task Deep Learning Improves Secreted Protein Discovery in Human Body Fluids

Kai He, Yan Wang, Xuping Xie and Dan Shao

1 Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, College of Computer Science and Technology, Jilin University, Changchun 130012, China
2 School of Artificial Intelligence, Jilin University, Changchun 130012, China
3 College of Computer Science and Technology, Changchun University, Changchun 130022, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(15), 2562; https://doi.org/10.3390/math10152562
Submission received: 15 June 2022 / Revised: 8 July 2022 / Accepted: 21 July 2022 / Published: 22 July 2022
(This article belongs to the Section Mathematical Biology)

Abstract

Prediction of secreted proteins in human body fluids is essential since secreted proteins hold promise as disease biomarkers. Various approaches have been proposed to predict whether a protein is secreted into a specific fluid from its sequence. However, there may be relationships between different human body fluids when proteins are secreted into them. Current approaches simply ignore these relationships, and their performance is therefore limited. Here, we present MultiSec, an improved approach for secreted protein discovery that exploits relationships between fluids via multi-task learning. Specifically, a sampling-based balance strategy is proposed to solve the imbalance problem in all fluids, an effective network is presented to extract features for all fluids, and multi-objective gradient descent is employed to prevent fluids from hurting each other. MultiSec was trained and tested in 17 human body fluids. Comparison benchmarks on the independent testing datasets demonstrate that our approach outperforms other available approaches in all compared fluids.

1. Introduction

A large number of molecules are contained in human body fluids, and these molecules are promising as biomarkers for disease diagnosis and therapeutic monitoring [1,2,3]. Among these molecules, one of the essential types of biomarkers is the secreted proteins. Because of this, discovering secreted proteins is an important step toward secreted protein biomarker identification. Although many secreted proteins have been identified through experiments in recent years, it remains a challenge to identify new secreted proteins in some human body fluids [4,5]. To facilitate the detection of secreted proteins, several computational approaches have been proposed to predict whether a protein is secreted into a specific fluid [6,7,8,9,10]. These efforts can accelerate the detection of secreted proteins and avoid many unnecessary wet experiments. Nowadays, secreted protein discovery by computational methods has become a well-studied topic in bioinformatics.
Among these approaches, the most successful method uses a support vector machine (SVM) and protein features [6,7]. First, the features of each protein are computed from its sequence using computational tools and websites [11,12]. Second, a feature selection method is used to choose representative features from this set. Finally, an SVM classifier is used to differentiate secreted proteins from non-secreted proteins based on the selected features. Although this approach is fast and effective, its limited representational ability constrains the performance of secreted protein discovery. Another effective approach uses deep learning and protein sequences [5]. Compared with the SVM-based approach, this approach can usually learn more complex features from protein sequences using a convolutional neural network (CNN), long short-term memory (LSTM), etc. [13]. These complex features enhance the representational ability of the approach and yield higher performance. However, deep learning always requires a large amount of data. Due to the limited number of secreted proteins in some human body fluids, the performance in many fluids may suffer from overfitting. Furthermore, for some human body fluids, such as sweat, the number of secreted proteins is too small to learn representative features for prediction. Therefore, a more effective approach is urgently needed to obtain more accurate predictions and to enable computational detection in these human body fluids.
These available approaches ignore relationships between different human body fluids. Typically, a protein can be secreted into several human body fluids, which may be related. Therefore, the predictions of computational methods in different human body fluids may also be related, and these relationships need to be considered when designing a computational approach to secreted protein discovery. Multi-task learning is a machine learning method that exploits relationships between tasks to improve the performance of all tasks [14,15]. Thus, we can use multi-task learning to take the relationships between human body fluids into account, regarding the prediction of whether a protein is secreted into a specific fluid as one task. In addition, a network shared between tasks is beneficial in preventing overfitting [16,17,18]. However, several problems arise when employing multi-task learning in multi-fluid secreted protein discovery. First, many of these fluids contain only a small number of secreted proteins, so positive samples may be far fewer than negative samples. Even when predicting secretion into a single fluid, this dataset imbalance is a problem that needs to be solved [5], and here the datasets of all human body fluids must be handled simultaneously. In addition, the objectives of these human body fluids may conflict with each other [19]; the performance in some fluids might suffer if they are not coordinated well. To obtain decent performance in all human body fluids, all these problems must be solved.
In this paper, we propose MultiSec, a novel approach that takes advantage of progress in multi-task learning and improves the state-of-the-art performance in secreted protein discovery. MultiSec was designed to simultaneously predict the probability that a protein is secreted into each of 17 human body fluids based on its sequence. This approach is composed of three modules: a balanced sampling module that generates balanced samples for each human body fluid during training, a lightweight convolutional neural network architecture that extracts deep features for proteins, and a multi-task classification module that calculates the probability that a protein is secreted into each of the 17 human body fluids. Finally, we trained MultiSec on 17 human body fluids, including plasma, saliva, urine, cerebrospinal fluid (CSF), seminal fluid, amniotic fluid, tear fluid, bronchoalveolar lavage fluid (BALF), milk, synovial fluid, nipple aspirate fluid (NAF), cervical–vaginal discharge (CVF), pleural effusion (PE), sputum, exhaled breath condensate (EBC), pancreatic juice (PJ), and sweat. MultiSec achieved more accurate predictions in all human body fluids, with areas under the ROC curve of 0.89–0.98. Comparison benchmarks on the independent testing datasets demonstrate that our approach outperforms other state-of-the-art approaches in all the compared human body fluids.

2. Materials and Methods

2.1. Data Collection

To use the relationships between different human body fluids, we need to first construct a multi-task dataset for secreted protein discovery. Therefore, a dataset named SecretedP17 was constructed from the publicly available database Human Body Fluid Proteome (HBFP) [20]. The HBFP database has collected 11,827 experimentally validated secreted proteins in 17 human body fluids. These human body fluids include plasma, saliva, urine, cerebrospinal fluid, seminal fluid, amniotic fluid, tear fluid, bronchoalveolar lavage fluid, milk, synovial fluid, nipple aspirate fluid, cervical–vaginal discharge, pleural effusion, sputum, exhaled breath condensate, pancreatic juice, and sweat. From this database, secreted proteins in 17 human body fluids and corresponding sequences were retrieved. Based on these data, 17 sub-datasets corresponding to 17 fluids were constructed individually.
Taking the plasma sub-dataset as an example, the construction process is as follows: First, proteins that were verified to be secreted into plasma were collected from the HBFP database as positive samples. Second, negative samples in plasma were generated from these positive samples and protein family information using the method of a previous study [6]. Specifically, negative samples were chosen from those proteins that belong to more than one family and have not been reported as plasma-secreted proteins. If none of a protein's families contains any plasma-secreted protein, the protein was regarded as a negative sample. Thus, negative samples in plasma could be obtained from protein family information. Third, redundant proteins with similar sequences were filtered out to allow an accurate performance evaluation. The PSI-CD-HIT program was used to calculate sequence similarity, and proteins were considered redundant if their similarity exceeded 90% [21]. Finally, the plasma sub-dataset was divided into training, validation, and testing datasets in a 60%/20%/20% ratio, keeping the proportion of positive samples the same in each split. Sub-datasets for the other 16 human body fluids were obtained using the same process. The plasma sub-dataset and the other sub-datasets were merged into the SecretedP17 dataset, whose details are shown in Table 1.
The preceding processing collected sequences and secretion labels in 17 human body fluids for each protein. Here, a Position-Specific Score Matrix (PSSM) was calculated for each protein with a method similar to the previous study [5]. The PSSM of each protein was obtained by running the PSI-BLAST program with an E-value threshold of 0.001 and 3 iterations against the UniRef90 database (2020_01) [11,22]. The dimension of the PSSM is L × 20, where the first dimension L corresponds to the sequence length of the protein, and the second dimension 20 corresponds to the 20 amino acids (aa) at each position. Then, the sigmoid function transformed the PSSM data into values between 0 and 1. After that, the PSSM data were processed into a fixed length (1000 aa) for efficient computation. If a protein sequence is longer than 1000 aa, its first 500 and last 500 positions are concatenated. For the remaining proteins, zeros of dimension (1000 − L) × 20 are padded at the end of their PSSM data. Because of the fixed sliding-window strategy in Section 2.2, our model is robust to this processing of the input PSSM data, and the processing keeps the input dimensions consistent. Finally, for each protein in the SecretedP17 dataset, PSSM data of dimension 1000 × 20 were collected.
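As an illustration, a minimal sketch of this preprocessing step is shown below, assuming the PSSM has already been parsed into an L × 20 NumPy array; the function name pad_or_truncate_pssm is ours and not part of the published implementation.

```python
import numpy as np

MAX_LEN = 1000  # fixed input length used by MultiSec

def pad_or_truncate_pssm(pssm: np.ndarray) -> np.ndarray:
    """Squash a raw L x 20 PSSM with a sigmoid and fix its length to 1000 x 20.

    Sequences longer than 1000 aa keep their first 500 and last 500 positions;
    shorter ones are zero-padded at the end, as described in Section 2.1.
    """
    pssm = 1.0 / (1.0 + np.exp(-pssm))          # sigmoid into (0, 1)
    length = pssm.shape[0]
    if length > MAX_LEN:
        pssm = np.concatenate([pssm[:500], pssm[-500:]], axis=0)
    else:
        pad = np.zeros((MAX_LEN - length, 20), dtype=pssm.dtype)
        pssm = np.concatenate([pssm, pad], axis=0)
    return pssm  # shape (1000, 20)
```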

2.2. Multi-Fluid Secreted Protein Discovery

Here, multi-fluid secreted protein discovery is regarded as a special case of multi-task classification, where the goal is to predict whether a protein is secreted into each of the human body fluids based on its sequence, and the dataset of each task may be highly imbalanced. Figure 1 summarizes the architecture of MultiSec for secreted protein discovery, which consists of three modules: balanced sampling, feature extraction, and multi-task classification. In the balanced sampling module, 17 groups of proteins corresponding to the 17 human body fluids are generated separately, and the proportion of positive and negative samples in each group is kept the same. After that, the feature extraction module extracts the deep features of these 17 groups of proteins individually from their PSSM data. In the multi-task classification module, the probabilities for each group of proteins to be secreted into the corresponding human body fluid are computed first. With these probabilities and the true labels, the losses for these 17 fluids can be calculated, and the multi-fluid loss is then computed from these 17 single-fluid losses. By optimizing this loss, MultiSec can be trained on all these human body fluids simultaneously.

2.2.1. Balanced Sampling

In multi-fluid secreted protein discovery, many fluids have more negative than positive samples, and neural networks tend to predict the class with the larger number of samples. Because of this, it is hard to discover the secreted proteins accurately. This dataset imbalance reduces the performance of secreted protein discovery and needs to be solved. Several methods (under-sampling, over-sampling, bagging, etc.) have been proposed to overcome the imbalanced dataset problem in machine learning [23]. Although a previous study solved the imbalanced dataset problem, it needed to train several networks [5]. To solve this problem while reducing the computation by training only one network, we propose the balanced sampling module. By sampling independently during the training of a single network, the imbalanced dataset problems in different human body fluids are addressed while utilizing as many samples of each class as possible.
In our opinion, this problem is caused by the large difference between the numbers of positive and negative samples during training. The class with many samples always accounts for a large proportion of the loss, so it always obtains a large gradient to update the network when the loss is minimized. As a result, the network tends to predict this class and ignore the others. Therefore, we believe this problem can be avoided by keeping the number of samples in each class the same.
Here, the balanced sampling module is proposed to generate same-size data for human body fluids in multi-fluid secreted protein discovery. First, each dataset is divided into positive and negative sets based on its corresponding label. By dividing all these 17 datasets, 34 sets (17 positive sets and 17 negative sets) are obtained. Then, independent random sampling is applied in each of these sets. As a result, 34 groups of data with the same number of samples are obtained from these sets. Afterward, groups with the same body fluid (positive and negative groups for a specific fluid) are merged. Finally, 17 groups of balanced data are generated for these 17 human body fluids.
By using this module, 17 groups of data are generated randomly at each training iteration. These groups of data correspond to 17 human body fluids and will be used for the calculation of other modules. When training with these data groups, the network would learn a good balance between secreted proteins and non-secreted proteins.
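A minimal sketch of this per-fluid sampling strategy is shown below; the class name PerFluidBalancedSampler and the per-fluid batch size are our own illustrative choices, not identifiers from the published code.

```python
import numpy as np

class PerFluidBalancedSampler:
    """Draw a balanced mini-batch (equal positives and negatives) for one fluid."""

    def __init__(self, positive_idx, negative_idx, batch_size=32, seed=0):
        self.pos = np.asarray(positive_idx)
        self.neg = np.asarray(negative_idx)
        self.half = batch_size // 2
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Independent random sampling from the positive and negative sets,
        # then merging them into one balanced group for this fluid.
        pos = self.rng.choice(self.pos, size=self.half, replace=True)
        neg = self.rng.choice(self.neg, size=self.half, replace=True)
        idx = np.concatenate([pos, neg])
        labels = np.concatenate([np.ones(self.half, dtype=np.int64),
                                 np.zeros(self.half, dtype=np.int64)])
        return idx, labels

# One sampler per fluid; at every training iteration each sampler yields one group:
# samplers = [PerFluidBalancedSampler(pos_ids[k], neg_ids[k]) for k in range(17)]
```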

2.2.2. Feature Extraction

The feature extraction module is designed to extract deep features from protein sequences for all human body fluids. In multi-fluid secreted protein discovery, this module is shared by the 17 human body fluids, so its computation is repeated 17 times per iteration. If the network architecture were too complex, the training process would be very slow. Therefore, the feature extraction module for multi-fluid secreted protein discovery requires a small, fast, and effective architecture.
A simple and effective architecture is adopted here, consisting of four parallel convolution-pooling operations and a fully connected layer. The input of the feature extraction module is the PSSM data of dimension 1000 × 20 collected previously. From the PSSM data, the convolution-pooling operations extract fixed-length features, and a fully connected layer then extracts deeper features from these fixed-length features.
The convolution-pooling operation consists of a convolutional layer that extracts information from the input sequence and a pooling layer that selects the important information. A convolutional layer contains a group of filters, and each filter can be regarded as a single motif detector. A motif detector scans the input sequence to detect the presence of the corresponding motif. Specifically, a score for this motif is calculated at each position from the local sequence window and the detector. By scoring every position of the input sequence, the motif detectors extract the relevant information [13,24]. The calculation of a motif detector usually consists of a weighted summation and a non-linear activation function. The motif information corresponds to the weight and bias parameters of the filters, which are learned in training. For computational efficiency, ReLU is used as the activation function. The computation of the convolutional layer is defined as follows:
C_{i,j} = \max\left(0, \sum_{d=-(w-1)/2}^{(w-1)/2} \sum_{c=1}^{20} X_{i+d,\,c}\, W^{j}_{d+(w-1)/2,\,c} + b_j\right),
where X is the PSSM data of the protein, W and b are the weight and bias of the convolutional layer, respectively, C is the protein information extracted by the convolutional layer, w is the filter size, and \max(0, x) is the ReLU activation function.
Much of the extracted information is not useful. The pooling layer is a dimensionality reduction method that selects a specific part of it. Average pooling extracts global information by averaging the input, whereas max pooling extracts the important information by selecting the maximum value from the input. The pooling size controls the size of the pooled local region. Here, the pooling size equals the length of the feature sequence, i.e., global max pooling [25], so the single largest response over the whole sequence is used as the most important information for each motif detector. Global max pooling is defined as follows:
v_j = \max_i C_{i,j},
where v_j denotes the information selected for the j-th motif detector. The convolution-pooling operation finally extracts a fixed-size feature vector v, whose size is the number of filters in the convolutional layer.
A single convolution-pooling operation can only extract features for motifs of one fixed length. Therefore, four convolution-pooling operations with different filter sizes are adopted to extract features for motifs of different lengths. These operations are applied to the input sequence in parallel, and the four extracted feature vectors are merged. The merged feature vector is computed as follows:
u = [v^1, v^2, \ldots, v^N],
where v^i represents the feature vector extracted by the i-th convolution-pooling operation.
A fully connected layer transforms the feature vector into more complex features. A fully connected layer consists of many neurons, each of which is connected to all input features. In each neuron, the output value is calculated by a weighted summation of the input features followed by a nonlinear transformation [13]. Similar to the convolutional layer, the ReLU activation function is used to speed up the computation. In the fully connected layer, the computation of the i-th neuron h_i is defined as follows:
h_i = \max(0, u \cdot \alpha_i + d_i),
where \alpha_i and d_i are the weight and bias of the i-th neuron, respectively. The values of all neurons in the fully connected layer constitute the final feature vector of the protein.
The feature extraction module is shared among all body fluids. Through this module, the feature vector of each protein can be extracted according to its corresponding PSSM data. Therefore, the 17 sets of proteins generated by balanced sampling are converted into 17 sets of corresponding feature vectors.
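A minimal PyTorch sketch of this shared feature extractor is shown below, using the hyper-parameters reported in Section 3.1 (filter sizes {1, 3, 5, 9}, 128 filters each, a 64-unit fully connected layer); the class name FeatureExtractor is our own illustrative choice.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Shared feature extraction module: four parallel convolution-pooling
    branches over the 1000 x 20 PSSM, followed by a fully connected layer."""

    def __init__(self, filter_sizes=(1, 3, 5, 9), n_filters=128, fc_dim=64):
        super().__init__()
        self.convs = nn.ModuleList([
            # Conv1d expects (batch, channels, length); the 20 PSSM columns are channels.
            nn.Conv1d(20, n_filters, kernel_size=w, padding=w // 2)
            for w in filter_sizes
        ])
        self.fc = nn.Linear(n_filters * len(filter_sizes), fc_dim)

    def forward(self, pssm):                    # pssm: (batch, 1000, 20)
        x = pssm.transpose(1, 2)                # -> (batch, 20, 1000)
        pooled = [torch.relu(conv(x)).max(dim=2).values  # global max pooling
                  for conv in self.convs]
        u = torch.cat(pooled, dim=1)            # merged feature vector (batch, 512)
        return torch.relu(self.fc(u))           # final 64-d protein feature
```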

2.2.3. Multi-Task Classification

The multi-task classification module calculates the probabilities that proteins are secreted into the 17 human body fluids based on the features extracted by the previous module. This module contains 17 output layers, and each output layer is similar to a fully connected layer. Each output layer contains two neurons representing the secreted and non-secreted classes, and the output value is calculated by a linear transformation of the input features. The output of the i-th neuron in the k-th output layer is calculated as follows:
o_i^k = h \cdot \beta_i^k + q_i^k,
where \beta_i^k and q_i^k represent the weight and bias of the i-th neuron in the k-th output layer, respectively. After that, the softmax function transforms the values of the output layer into the secretion probability. The k-th probability p^k corresponding to the k-th human body fluid is calculated as follows:
p^k = \frac{\exp(o_2^k)}{\exp(o_1^k) + \exp(o_2^k)}.
Furthermore, when p^k > 0.5, the protein is predicted to be secreted into the k-th human body fluid. From these predicted probabilities, the overall probability that a protein is secreted into all 17 human body fluids simultaneously can be computed as follows:
\hat{p} = \prod_{k=1}^{17} p^k.
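A minimal PyTorch sketch of these 17 output layers on top of the shared extractor is shown below; the class name MultiTaskHeads is our own illustrative choice.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """17 fluid-specific output layers: each maps the shared 64-d feature to
    two logits (non-secreted / secreted) and a secretion probability."""

    def __init__(self, feat_dim=64, n_fluids=17):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(feat_dim, 2) for _ in range(n_fluids)])

    def forward(self, h, fluid_idx):
        # Logits and softmax probability of secretion for one body fluid.
        logits = self.heads[fluid_idx](h)                 # (batch, 2)
        prob_secreted = torch.softmax(logits, dim=1)[:, 1]
        return logits, prob_secreted

# The overall probability of secretion into all 17 fluids is the product of the
# per-fluid probabilities, as in the equation above, e.g.:
# p_hat = torch.stack([probs[k] for k in range(17)], dim=0).prod(dim=0)
```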
Cross-entropy loss is used as the loss function for secreted protein discovery in each human body fluid. This single-fluid loss is calculated based on predicted probabilities and true labels in the same human body fluid. By calculating for each of these fluids, 17 single-fluid losses for human body fluids can be obtained. The k-th single-fluid loss is calculated as follows:
L^k = -\frac{1}{N^k} \sum_{i=1}^{N^k} \left[\, y_i^k \log p_i^k + (1 - y_i^k) \log(1 - p_i^k)\, \right],
where y_i^k and p_i^k represent the label and predicted probability for the i-th protein to be secreted into the k-th human body fluid, respectively, and N^k represents the number of proteins.
To discover secreted proteins in 17 human body fluids, the multi-fluid loss needs to be computed from these single-fluid losses. Here, the multi-fluid loss is computed as a weighted summation, which is defined as follows:
L = \sum_{k=1}^{17} \lambda^k L^k,
where \lambda^k represents the weight coefficient of the k-th single-fluid loss. These weight coefficients still need to be determined. Usually, the coefficients are set by hand, but fixed coefficients may cause the network to suffer from task conflict and negative transfer [16,17,19], so the performance in some fluids may be reduced.
To ensure that all fluids obtain good performance, a multi-objective gradient descent (MGDA) algorithm is employed [19,26]. The MGDA algorithm dynamically calculates the weight coefficients from the gradient vectors of all human body fluids. First, 17 gradient vectors are calculated individually, each of which is the derivative of the corresponding single-fluid loss with respect to the parameters of the feature extraction module. The k-th gradient vector g^k is calculated as follows:
g^k = \frac{\partial L^k}{\partial \theta},
where \theta contains all the weights and biases in the feature extraction module. After that, the weight coefficients are solved for by finding the minimum-norm point in the convex hull of these gradient vectors, which is optimized as follows:
\min_{\lambda^1, \ldots, \lambda^{17}} \left\| \sum_{k=1}^{17} \lambda^k g^k \right\|_2^2 \quad \mathrm{s.t.} \quad \sum_{k=1}^{17} \lambda^k = 1, \; \lambda^k \geq 0.
This is a convex quadratic program with linear constraints, which can easily be solved with available optimization packages. Solving it yields the weight coefficients of the single-fluid losses, and substituting these coefficients into the weighted summation above gives the multi-fluid loss for secreted protein discovery. By optimizing the multi-fluid loss, secreted protein predictions for the 17 human body fluids can be trained simultaneously.
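As an illustration, a minimal sketch of this min-norm quadratic program with the CVXOPT package (which the implementation in Section 3.1 reports using) is shown below; the function name solve_min_norm_weights is our own.

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_min_norm_weights(grads):
    """Find lambda minimizing || sum_k lambda_k g_k ||^2 subject to
    sum_k lambda_k = 1 and lambda_k >= 0 (the MGDA min-norm point).

    grads: list of 17 flattened gradient vectors (NumPy arrays) of the
    shared feature extraction parameters, one per body fluid.
    """
    G = np.stack(grads)                 # (17, n_params)
    M = G @ G.T                         # Gram matrix of pairwise dot products
    n = M.shape[0]
    # CVXOPT solves min 1/2 x^T P x + q^T x  s.t.  Gx <= h, Ax = b.
    P = matrix(2.0 * M)
    q = matrix(np.zeros(n))
    Gc = matrix(-np.eye(n))             # -lambda <= 0  (i.e., lambda >= 0)
    h = matrix(np.zeros(n))
    A = matrix(np.ones((1, n)))
    b = matrix(np.ones(1))
    solvers.options["show_progress"] = False
    sol = solvers.qp(P, q, Gc, h, A, b)
    return np.array(sol["x"]).ravel()   # weight coefficients lambda_1..lambda_17
```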
At training time, the gradient vector used to update the parameters of the feature extraction module corresponds to the minimum-norm point found above [19]. The norm of this vector controls the step size of the update. When this norm is not 0, the MGDA algorithm can always find a direction along which the updated feature extraction module achieves better performance in all fluids. When this norm is close to 0, the feature extraction module receives only a slight gradient, and the performance of the fluids will not decrease. At the same time, the output layers for different fluids can still be optimized because each relies only on its corresponding fluid. Therefore, optimizing the multi-fluid loss with the MGDA algorithm prevents these fluids from being hurt by each other.

2.2.4. Evaluation

The performance of secreted protein discovery is evaluated by sensitivity (SN), specificity (SP), accuracy (ACC), F1 score (F1), Matthew’s correlation coefficient (MCC), and Area under the ROC Curve (AUC). These metrics are defined as follows:
\mathrm{SN} = \frac{TP}{TP + FN},
\mathrm{SP} = \frac{TN}{TN + FP},
\mathrm{ACC} = \frac{TP + TN}{TP + TN + FP + FN},
\mathrm{F1} = \frac{2\,TP}{2\,TP + FP + FN},
\mathrm{MCC} = \frac{TP \times TN - FN \times FP}{\sqrt{(TP + FN)(TP + FP)(TN + FP)(TN + FN)}},
where TP, TN, FP, and FN represent the numbers of protein samples corresponding to true positives, true negatives, false positives, and false negatives, respectively.
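A minimal sketch of computing these metrics with Scikit-Learn (which the implementation reports using) might be:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate_fluid(y_true, y_prob, threshold=0.5):
    """Compute SN, SP, ACC, F1, MCC, and AUC for one body fluid from the
    predicted secretion probabilities and the true 0/1 labels."""
    y_pred = (np.asarray(y_prob) > threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "SN": tp / (tp + fn),
        "SP": tn / (tn + fp),
        "ACC": accuracy_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),
    }
```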

3. Results

3.1. Performance of MultiSec in 17 Human Body Fluids

The implementation of MultiSec is based on the Python packages PyTorch, CVXOPT, and Scikit-Learn [27,28]. MultiSec was trained on the 17 sub-datasets of SecretedP17 simultaneously. First, 17 groups of data were generated by the balanced sampling module, each group containing 32 samples. Second, four parallel convolution-pooling operations were employed, with filter sizes of {1, 3, 5, 9} and 128 filters each. With these operations, four feature vectors were extracted and merged into one feature vector of size 512. The fully connected layer in the feature extraction module used 64 neurons, so 17 groups of features of size 64 were extracted. Third, the multi-task classification module calculated the predicted values for each human body fluid from these features, and the classification loss corresponding to each body fluid was then obtained. The weight coefficients of all body fluids were solved with the QP program in the CVXOPT package, and the multi-task loss was calculated by weighted summation. This multi-task classification loss for secreted protein prediction was optimized by the Adam optimizer with a learning rate of 1 × 10^{-4}. MultiSec was trained for 20,000 iterations, and the iteration with the highest F1 score was selected for each body fluid using the corresponding validation dataset.
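As an illustration, a minimal sketch of one training iteration combining the modules above is shown below; the names FeatureExtractor, MultiTaskHeads, PerFluidBalancedSampler, and solve_min_norm_weights refer to the illustrative sketches in Section 2.2 and are our own, not identifiers from the published code.

```python
import torch
import torch.nn.functional as F

# extractor = FeatureExtractor(); heads = MultiTaskHeads()
# samplers: list of 17 PerFluidBalancedSampler objects
# pssm_data: float tensor of shape (n_proteins, 1000, 20)
optimizer = torch.optim.Adam(
    list(extractor.parameters()) + list(heads.parameters()), lr=1e-4)

for iteration in range(20000):
    losses, grads = [], []
    for k in range(17):                              # one balanced group per fluid
        idx, labels = samplers[k].sample()
        feats = extractor(pssm_data[idx])
        logits, _ = heads(feats, k)
        loss_k = F.cross_entropy(logits, torch.as_tensor(labels))
        losses.append(loss_k)
        # Gradient of this fluid's loss w.r.t. the shared extractor parameters.
        g = torch.autograd.grad(loss_k, extractor.parameters(), retain_graph=True)
        grads.append(torch.cat([p.reshape(-1) for p in g]).detach().numpy())

    lambdas = solve_min_norm_weights(grads)          # MGDA weight coefficients
    multi_fluid_loss = sum(l * w for l, w in zip(losses, lambdas))
    optimizer.zero_grad()
    multi_fluid_loss.backward()
    optimizer.step()
```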
After training, MultiSec was evaluated on the testing datasets of 17 human body fluids, including plasma, saliva, urine, cerebrospinal fluid, seminal fluid, amniotic fluid, tear fluid, bronchoalveolar lavage fluid, milk, synovial fluid, nipple aspirate fluid, cervical–vaginal discharge, pleural effusion, sputum, exhaled breath condensate, pancreatic juice, and sweat. Table 2 reports the benchmarks of MultiSec on testing datasets of 17 human body fluids. MultiSec achieved performances of 81.70–98.62%, 55.38–88.37%, 85.99–99.56%, 61.08–87.06%, 54.43–76.26%, and 87.99–98.07% on ACC, SN, SP, F1, MCC, and AUC metrics, respectively. This demonstrates that MultiSec obtained impressive performances in all 17 fluids simultaneously.

3.2. Comparison with Other Methods in 14 Human Body Fluids

We compared the performance of MultiSec with other available methods, including the SVM-based approach, the decision tree (DT)-based approach, and DeepSec. Furthermore, we also replaced the balanced sampling module with random sampling in the training of MultiSec as a comparison; this model is denoted MultiSecRS (MultiSec with random sampling). The hyper-parameters of these methods were selected by the MCC metric on the validation datasets, and the performances on the testing datasets were reported as benchmarks of these methods.
The SVM-based and DT-based methods are built on protein features [6,7]. First, 1610 features (protein properties, such as sequence length, weight, amino acid composition, etc.) were collected from the protein sequences by using computational tools and websites. After that, the t-test and false discovery rate (FDR) were used to select the 50 most important features for each fluid [5]. Finally, SVM classifiers were used to predict whether a protein is secreted into a specific fluid based on these 50 features, and the performances on the testing datasets were reported as the SVM-based approach benchmarks. Through a similar process, the benchmarks of the DT-based approach on all body fluids were obtained.
Unlike the previous feature-based methods, DeepSec does not need feature collection and selection but performs end-to-end training on protein PSSM data [5]. DeepSec was trained on each sub-dataset individually, with a bagging-based strategy adopted to solve the imbalance problem. First, the class with more samples was divided into several subsets, and each subset was combined with the smaller class to form several datasets. Second, several DeepSec networks were trained separately on these datasets. Finally, predictions were made by averaging the probabilities produced by these networks. In DeepSec, many networks must be trained to discover secreted proteins in even a single fluid, so DeepSec consumes considerable computational time and resources. Therefore, we only trained DeepSec in the 14 human body fluids where the number of networks is no more than 10, including plasma, saliva, urine, cerebrospinal fluid, seminal fluid, amniotic fluid, tear fluid, bronchoalveolar lavage fluid, milk, synovial fluid, nipple aspirate fluid, pleural effusion, sputum, and sweat. The benchmarks of DeepSec were calculated on the independent testing datasets of these 14 human body fluids.
To compare MultiSec with the other approaches intuitively, the benchmarks of these methods were averaged over the 14 human body fluids. Table 3 presents the average benchmarks of these approaches. As shown in this table, either MultiSecRS or MultiSec achieves the highest score on every metric. Specifically, MultiSec outperforms MultiSecRS on the SN metric by 17.65%; this improvement shows that the balanced sampling module helps MultiSec detect more secreted proteins. Furthermore, MultiSec outperforms DeepSec by 7.34%, 4.85%, 8.62%, 12.12%, 15.28%, and 5.36% on the ACC, SN, SP, F1, MCC, and AUC metrics, respectively. These improvements demonstrate that our approach successfully exploits relationships between human body fluids and predicts secreted proteins more accurately on average.
Figure 2 shows four radar charts of the comparison benchmarks (ACC, F1, MCC, and AUC) in the 14 human body fluids. MultiSec achieves the highest value on all axes of all four radar charts except the sputum axis of the ACC chart. Although the ACC of the DT-based approach is higher in sputum, MultiSec outperforms the other approaches on the remaining metrics; on an imbalanced dataset, these metrics reveal the real performance better than accuracy does, so MultiSec still outperforms the other approaches in sputum fluid. In addition, the ACC of MultiSec is only slightly higher than that of the DT-based approach in sweat fluid. It is possible that the DT-based approach predicts more negative samples, which is why MultiSec barely improves on its accuracy in the sputum and sweat fluids. Comparing the results in Figure 2, we conclude that MultiSec predicts more accurate probabilities for secreted protein discovery than the other approaches in all these 14 human body fluids.
We also compare the computational consumption of MultiSec and DeepSec in Table 4. This table shows that MultiSec covers more body fluids with less training time and fewer parameters than DeepSec. In particular, MultiSec trains about 30 times faster than DeepSec.

3.3. Potentially Secreted Proteins

MultiSec was also applied to identify potentially secreted proteins (PSPs) in the 17 human body fluids, i.e., proteins that have not been verified by experiment but are predicted as positive by our approach. We retrained MultiSec using the training and validation datasets, and the testing datasets were used to choose the appropriate iteration for each fluid.
Those proteins that are neither secreted proteins nor negative samples were selected as candidates. Secreted probabilities of these candidates can be calculated by using MultiSec. When the probability of a protein in the corresponding fluid is greater than 0.5, this protein is predicted to be a secreted protein, which is also named PSP. From 17 human body fluids, 17 groups of PSPs were identified by our approach. Table 5 shows the PSP information in each fluid. The details of PSPs and their probabilities are reported in Supplementary Table S1.
Among these PSPs, 103 were found in all 17 groups, which means our approach predicted them to be secreted into all 17 human body fluids. We refer to these 103 PSPs as potentially universal secreted proteins (PUSPs). Furthermore, the probability for a protein to be secreted into all 17 human body fluids was calculated by multiplying its secretion probabilities across the fluids. After that, 20 PUSPs were found to have a relatively high probability of being secreted into all 17 human body fluids. The information on these 20 PUSPs is reported in Table 6, and the details of all 103 PUSPs are listed in Supplementary Table S2. We believe that these 20 PUSPs are more worthwhile to investigate than the others because they are more likely to be secreted into all 17 human body fluids simultaneously.

4. Discussion

The comparison benchmarks presented in the previous section demonstrate that exploiting the relationships between different human body fluids can improve the discovery of secreted proteins. Furthermore, Table A5 shows that our method outperforms DeepSec by 6.08–23.75% on the MCC metric. All these comparisons indicate that MultiSec is the best-performing method currently available and is superior to other state-of-the-art methods in secreted protein discovery.
To discover secreted proteins in these 14 human body fluids, DeepSec needs to train 56 networks (1, 3, 1, 2, 2, 3, 5, 2, 3, 6, 6, 6, 6, and 10 for plasma, saliva, urine, cerebrospinal fluid, seminal fluid, amniotic fluid, tear fluid, bronchoalveolar lavage fluid, milk, synovial fluid, nipple aspirate fluid, pleural effusion, sputum, and sweat, respectively). Therefore, DeepSec consumes considerable computational resources. In contrast, our approach can discover secreted proteins in all 17 human body fluids using only a single network, and it also performs better than DeepSec in all 14 compared human body fluids. From the comparison with DeepSec, we conclude that MultiSec improves performance in all the human body fluids and significantly reduces the number of networks and the training time.

5. Conclusions

In summary, we present MultiSec, a novel approach to predict whether a protein is secreted into each of 17 human body fluids from its sequence. The approach was designed to exploit relationships between different human body fluids via multi-task learning. The benchmarks show that MultiSec outperforms other state-of-the-art approaches in all compared human body fluids. Furthermore, compared with DeepSec, our approach reduces the many networks required for secreted protein discovery in 17 human body fluids to a single network. Our improvements also confirm that relationships between different human body fluids exist and that they help in discovering secreted proteins.
Afterward, MultiSec was used to identify potentially secreted proteins. With this approach, 1244–6742 potentially secreted proteins were discovered in each of the 17 human body fluids. Furthermore, 103 proteins are predicted to be secreted into all these fluids simultaneously, and 20 of them are reported to have a relatively high probability of being secreted into all 17 human body fluids simultaneously. We believe these identified proteins are worthwhile for further study with biological experiments.
In the future, we will consider fusing more features, such as protein properties and secondary structures, into our approach. In addition, due to the limited number of secreted proteins, computational approaches for secreted protein discovery are prone to overfitting, so a more effective network architecture is also worth exploring. Furthermore, other protein prediction tasks, such as signal peptide identification and protein subcellular localization, may also be related to secreted protein discovery [24,29,30]. We will also explore more such tasks to improve secreted protein discovery.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math10152562/s1, Table S1: Potentially secreted proteins in 17 human body fluids; Table S2: Potentially universal secreted proteins in 17 human body fluids.

Author Contributions

Conceptualization, K.H.; methodology, K.H. and Y.W.; validation, X.X. and D.S.; formal analysis, D.S.; investigation, K.H. and X.X.; data curation, K.H. and D.S.; writing—original draft preparation, K.H.; writing—review and editing, K.H. and Y.W.; visualization, K.H., D.S. and X.X.; supervision, Y.W.; project administration, Y.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62072212) and the Development Project of Jilin Province of China (Nos. 20200401083GX, 2020C003, 20200403172SF).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data and code that support the reported results can be found at https://sites.google.com/view/multisec (accessed on 10 June 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Comparison Details with Other Methods in 17 Human Body Fluids

The main text only includes the comparative average benchmarks and the radar plots of MultiSec and the other approaches because of space limitations. Here, we present all the comparison benchmarks of these methods in 17 human body fluids. Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 show the comparison benchmarks on the ACC, SN, SP, F1, MCC, and AUC metrics. These tables also provide the comparison benchmarks for the remaining fluids, including cervical-vaginal discharge, exhaled breath condensate, and pancreatic juice. From these tables, MultiSec achieves higher scores than the DT-based and SVM-based approaches. In particular, on the MCC metric shown in Table A5, MultiSec outperforms the SVM-based approach by 32.59%, 43.42%, and 36.64% in cervical-vaginal discharge, exhaled breath condensate, and pancreatic juice, respectively. This also confirms that our method is more accurate than the other methods.
Table A1. ACC benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.7224 | 0.6992 | 0.8300 | 0.8682
Saliva | 0.7885 | 0.7681 | 0.8339 | 0.9257
Urine | 0.7251 | 0.7319 | 0.8218 | 0.8730
Cerebrospinal fluid | 0.7375 | 0.7370 | 0.8026 | 0.8296
Seminal fluid | 0.7297 | 0.7441 | 0.7929 | 0.8270
Amniotic fluid | 0.8224 | 0.8034 | 0.8487 | 0.9254
Tear fluid | 0.8225 | 0.7744 | 0.8321 | 0.9174
Bronchoalveolar lavage fluid | 0.7361 | 0.7314 | 0.8363 | 0.8801
Milk fluid | 0.7617 | 0.6731 | 0.8171 | 0.8979
Synovial fluid | 0.8537 | 0.7456 | 0.7887 | 0.9049
Nipple aspirate fluid | 0.8645 | 0.8173 | 0.8317 | 0.9209
Cervical-vaginal discharge | 0.9304 | 0.7905 | – | 0.9385
Pleural effusion | 0.8370 | 0.6949 | 0.7795 | 0.8950
Sputum | 0.9014 | 0.8930 | 0.8296 | 0.8863
Exhaled breath condensate | 0.9744 | 0.9264 | – | 0.9878
Pancreatic juice | 0.9489 | 0.9346 | – | 0.9577
Sweat | 0.8943 | 0.7165 | 0.7769 | 0.8986
The best results are in bold.
Table A2. SN benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.7933 | 0.6126 | 0.8553 | 0.8614
Saliva | 0.3413 | 0.7520 | 0.6964 | 0.8373
Urine | 0.8192 | 0.7346 | 0.8329 | 0.8630
Cerebrospinal fluid | 0.4914 | 0.6422 | 0.7696 | 0.7806
Seminal fluid | 0.6166 | 0.7898 | 0.7771 | 0.7924
Amniotic fluid | 0.4870 | 0.8017 | 0.8348 | 0.8817
Tear fluid | 0.2989 | 0.7582 | 0.7690 | 0.8614
Bronchoalveolar lavage fluid | 0.4182 | 0.7423 | 0.7299 | 0.8256
Milk fluid | 0.2004 | 0.6853 | 0.7026 | 0.7823
Synovial fluid | 0.2033 | 0.7311 | 0.7541 | 0.8230
Nipple aspirate fluid | 0.3384 | 0.7530 | 0.8018 | 0.8811
Cervical-vaginal discharge | 0.2000 | 0.7829 | – | 0.8629
Pleural effusion | 0.2230 | 0.7491 | 0.7422 | 0.8014
Sputum | 0.4425 | 0.5015 | 0.7935 | 0.7375
Exhaled breath condensate | 0.0769 | 0.4769 | – | 0.5538
Pancreatic juice | 0.2403 | 0.3566 | – | 0.8837
Sweat | 0.1552 | 0.7672 | 0.7716 | 0.7802
The best results are in bold.
Table A3. SP benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.6272 | 0.8157 | 0.7951 | 0.8599
Saliva | 0.9285 | 0.7732 | 0.8782 | 0.9528
Urine | 0.5872 | 0.7279 | 0.8036 | 0.9002
Cerebrospinal fluid | 0.8973 | 0.7986 | 0.7699 | 0.8917
Seminal fluid | 0.7911 | 0.7192 | 0.7877 | 0.8686
Amniotic fluid | 0.9330 | 0.8040 | 0.8510 | 0.9444
Tear fluid | 0.9229 | 0.7775 | 0.8426 | 0.9390
Bronchoalveolar lavage fluid | 0.8755 | 0.7267 | 0.8823 | 0.9073
Milk fluid | 0.9393 | 0.6692 | 0.8574 | 0.9093
Synovial fluid | 0.9569 | 0.7479 | 0.7911 | 0.9101
Nipple aspirate fluid | 0.9526 | 0.8281 | 0.8372 | 0.9352
Cervical-vaginal discharge | 0.9834 | 0.7910 | – | 0.9544
Pleural effusion | 0.9340 | 0.6863 | 0.7936 | 0.9009
Sputum | 0.9832 | 0.9627 | 0.8360 | 0.8833
Exhaled breath condensate | 0.9940 | 0.9362 | – | 0.9956
Pancreatic juice | 0.9842 | 0.9633 | – | 0.9606
Sweat | 0.9678 | 0.7114 | 0.7804 | 0.9104
The best results are in bold.
Table A4. F1 benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.7663 | 0.7002 | 0.8523 | 0.8835
Saliva | 0.4349 | 0.6074 | 0.6686 | 0.8456
Urine | 0.7798 | 0.7650 | 0.8471 | 0.8911
Cerebrospinal fluid | 0.5958 | 0.6579 | 0.7201 | 0.7792
Seminal fluid | 0.6162 | 0.6847 | 0.6972 | 0.7503
Amniotic fluid | 0.5761 | 0.6691 | 0.7327 | 0.8550
Tear fluid | 0.3514 | 0.5196 | 0.5958 | 0.7670
Bronchoalveolar lavage fluid | 0.4914 | 0.6275 | 0.7319 | 0.8087
Milk fluid | 0.2879 | 0.5020 | 0.6466 | 0.7759
Synovial fluid | 0.2756 | 0.4403 | 0.4952 | 0.6910
Nipple aspirate fluid | 0.4173 | 0.5417 | 0.5774 | 0.7577
Cervical-vaginal discharge | 0.2800 | 0.3358 | – | 0.6638
Pleural effusion | 0.2718 | 0.4011 | 0.4751 | 0.6697
Sputum | 0.5758 | 0.5862 | 0.5848 | 0.6614
Exhaled breath condensate | 0.1136 | 0.2168 | – | 0.6542
Pancreatic juice | 0.3085 | 0.3407 | – | 0.6686
Sweat | 0.2099 | 0.3287 | 0.3863 | 0.5724
The best results are in bold.
Table A5. MCC benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.4271 | 0.4278 | 0.6522 | 0.7323
Saliva | 0.3356 | 0.4686 | 0.5592 | 0.7968
Urine | 0.4196 | 0.4562 | 0.6345 | 0.7396
Cerebrospinal fluid | 0.4353 | 0.4448 | 0.5802 | 0.6410
Seminal fluid | 0.4076 | 0.4878 | 0.5406 | 0.6182
Amniotic fluid | 0.4814 | 0.5498 | 0.6390 | 0.8058
Tear fluid | 0.2576 | 0.4261 | 0.5173 | 0.7218
Bronchoalveolar lavage fluid | 0.3297 | 0.4379 | 0.6141 | 0.7220
Milk fluid | 0.2043 | 0.3074 | 0.5265 | 0.7119
Synovial fluid | 0.2232 | 0.3536 | 0.4210 | 0.6411
Nipple aspirate fluid | 0.3578 | 0.4671 | 0.5135 | 0.7189
Cervical-vaginal discharge | 0.2745 | 0.3338 | – | 0.6597
Pleural effusion | 0.1907 | 0.3090 | 0.3949 | 0.6175
Sputum | 0.5584 | 0.5369 | 0.5147 | 0.5981
Exhaled breath condensate | 0.1183 | 0.2302 | – | 0.6644
Pancreatic juice | 0.2972 | 0.3067 | – | 0.6731
Sweat | 0.1734 | 0.2916 | 0.3560 | 0.5380
The best results are in bold.
Table A6. AUC benchmarks of MultiSec and compared methods on the independent testing datasets of 17 human body fluids.

Fluid Name | DT | SVM | DeepSec | MultiSec
Plasma/Serum | 0.7823 | 0.7969 | 0.9085 | 0.9383
Saliva | 0.7392 | 0.8339 | 0.8775 | 0.9546
Urine | 0.7811 | 0.8159 | 0.9008 | 0.9462
Cerebrospinal fluid | 0.7721 | 0.8042 | 0.8642 | 0.8867
Seminal fluid | 0.7716 | 0.8238 | 0.8576 | 0.8976
Amniotic fluid | 0.8061 | 0.8765 | 0.9250 | 0.9654
Tear fluid | 0.7305 | 0.8504 | 0.8929 | 0.9493
Bronchoalveolar lavage fluid | 0.7460 | 0.8108 | 0.8968 | 0.9400
Milk fluid | 0.7111 | 0.7490 | 0.8591 | 0.9152
Synovial fluid | 0.7053 | 0.8241 | 0.8452 | 0.9219
Nipple aspirate fluid | 0.7354 | 0.8605 | 0.8812 | 0.9644
Cervical-vaginal discharge | 0.7079 | 0.8681 | – | 0.9715
Pleural effusion | 0.6184 | 0.7883 | 0.8404 | 0.9068
Sputum | 0.8004 | 0.8273 | 0.8911 | 0.9177
Exhaled breath condensate | 0.6255 | 0.7871 | – | 0.9276
Pancreatic juice | 0.7335 | 0.8809 | – | 0.9810
Sweat | 0.7397 | 0.8191 | 0.8470 | 0.9340
The best results are in bold.

References

  1. Lathrop, J.T.; Anderson, N.L.; Anderson, N.G.; Hammond, D.J. Therapeutic potential of the plasma proteome. Curr. Opin. Mol. Ther. 2003, 5, 250–257.
  2. Anderson, N.L. The Clinical Plasma Proteome: A Survey of Clinical Assays for Proteins in Plasma and Serum. Clin. Chem. 2010, 56, 177–185.
  3. Shen, F.; Zhang, Y.; Yao, Y.; Hua, W.; Zhang, H.S.; Wu, J.S.; Zhong, P.; Zhou, L.F. Proteomic analysis of cerebrospinal fluid: Toward the identification of biomarkers for gliomas. Neurosurg. Rev. 2014, 37, 367–380.
  4. Huang, L.; Shao, D.; Wang, Y.; Cui, X.; Li, Y.; Chen, Q.; Cui, J. Human body-fluid proteome: Quantitative profiling and computational prediction. Brief. Bioinform. 2021, 22, 315–333.
  5. Shao, D.; Huang, L.; Wang, Y.; He, K.; Cui, X.; Wang, Y.; Ma, Q.; Cui, J. DeepSec: A deep learning framework for secreted protein discovery in human body fluids. Bioinformatics 2021, 38, 228–235.
  6. Cui, J.; Liu, Q.; Puett, D.; Xu, Y. Computational prediction of human proteins that can be secreted into the bloodstream. Bioinformatics 2008, 24, 2370–2375.
  7. Wang, Y.; Du, W.; Liang, Y.; Chen, X.; Zhang, C.; Pang, W.; Xu, Y. PUEPro: A Computational Pipeline for Prediction of Urine Excretory Proteins. In Proceedings of the 12th Advanced Data Mining and Applications, Gold Coast, QLD, Australia, 12–15 December 2016; Volume 10086 LNAI, pp. 714–725.
  8. Wang, J.; Liang, Y.; Wang, Y.; Cui, J.; Liu, M.; Du, W.; Xu, Y. Computational Prediction of Human Salivary Proteins from Blood Circulation and Application to Diagnostic Biomarker Identification. PLoS ONE 2013, 8, e80211.
  9. Sun, Y.; Du, W.; Zhou, C.; Zhou, Y.; Cao, Z.; Tian, Y.; Wang, Y. A Computational Method for Prediction of Saliva-Secretory Proteins and Its Application to Identification of Head and Neck Cancer Biomarkers for Salivary Diagnosis. IEEE Trans. Nanobiosci. 2015, 14, 167–174.
  10. Hu, L.L.; Huang, T.; Cai, Y.D.; Chou, K.C. Prediction of Body Fluids where Proteins are Secreted into Based on Protein Interaction Network. PLoS ONE 2011, 6, e22989.
  11. Apweiler, R. The Universal Protein Resource (UniProt) in 2010. Nucleic Acids Res. 2010, 38, D142–D148.
  12. Rao, H.B.; Zhu, F.; Yang, G.B.; Li, Z.R.; Chen, Y.Z. Update of PROFEAT: A web server for computing structural and physicochemical features of proteins and peptides from amino acid sequence. Nucleic Acids Res. 2011, 39, W385–W390.
  13. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  14. Caruana, R. Multitask Learning. Mach. Learn. 1997, 28, 41–75.
  15. Zhang, Y.; Yang, Q. A Survey on Multi-Task Learning. IEEE Trans. Knowl. Data Eng. 2021, 1.
  16. Cipolla, R.; Gal, Y.; Kendall, A. Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7482–7491.
  17. Chen, Z.; Badrinarayanan, V.; Lee, C.Y.; Rabinovich, A. GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Volume 2, pp. 1240–1251.
  18. Lin, X.; Zhen, H.L.; Li, Z.; Zhang, Q.; Kwong, S. Pareto Multi-Task Learning. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
  19. Sener, O. Multi-Task Learning as Multi-Objective Optimization. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 525–536.
  20. Shao, D.; Huang, L.; Wang, Y.; Cui, X.; Li, Y.; Wang, Y.; Ma, Q.; Du, W.; Cui, J. HBFP: A new repository for human body fluid proteome. Database 2021, 2021, 1–14.
  21. Huang, Y.; Niu, B.; Gao, Y.; Fu, L.; Li, W. CD-HIT Suite: A web server for clustering and comparing biological sequences. Bioinformatics 2010, 26, 680–682.
  22. Altschul, S. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Res. 1997, 25, 3389–3402.
  23. Batista, G.E.A.P.A.; Prati, R.C.; Monard, M.C. A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl. 2004, 6, 20–29.
  24. Savojardo, C.; Martelli, P.L.; Fariselli, P.; Casadio, R. DeepSig: Deep learning improves signal peptide detection in proteins. Bioinformatics 2018, 34, 1690–1696.
  25. Kim, Y. Convolutional Neural Networks for Sentence Classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Stroudsburg, PA, USA, 25–29 October 2014; pp. 1746–1751.
  26. Désidéri, J.A. Multiple-gradient descent algorithm (MGDA) for multiobjective optimization. C. R. Math. 2012, 350, 313–318.
  27. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32.
  28. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Müller, A.; Nothman, J.; Louppe, G.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  29. Standley, T.; Zamir, A.; Chen, D.; Guibas, L.; Malik, J.; Savarese, S. Which tasks should be learned together in multi-task learning? In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 9120–9132.
  30. Almagro Armenteros, J.J.; Sønderby, C.K.; Sønderby, S.K.; Nielsen, H.; Winther, O. DeepLoc: Prediction of protein subcellular localization using deep learning. Bioinformatics 2017, 33, 3387–3395.
Figure 1. The architecture of MultiSec to discover secreted proteins by sequence: (a) the balanced sampling module generates balanced data for each fluid; (b) the feature extraction module adopts four convolution-pooling operations and a fully connected layer; (c) the multi-task classification module contains 17 output layers to calculate the probabilities for the 17 human body fluids.
Figure 2. Radar charts of comparative benchmarks on the independent testing datasets corresponding to 14 human body fluids. Higher is better: (a) radar chart of accuracy in 14 human body fluids; (b) radar chart of F1 score in 14 human body fluids; (c) radar chart of Matthew's correlation coefficient in 14 human body fluids; (d) radar chart of area under the ROC Curve in 14 human body fluids.
Table 1. The numbers of proteins and the ratio of positive and negative samples in each human body fluid of the SecretedP17 dataset.

Fluid Name | Notation | Positive | Negative | All | Ratio
Plasma/Serum | Plasma | 6530 | 4856 | 11,386 | 0.74
Saliva | Saliva | 2521 | 8048 | 10,569 | 3.19
Urine | Urine | 6972 | 4760 | 11,732 | 0.68
Cerebrospinal fluid | CSF | 4082 | 6281 | 10,363 | 1.54
Seminal fluid | Seminal | 3929 | 7230 | 11,159 | 1.84
Amniotic fluid | Amniotic | 2876 | 8725 | 11,601 | 3.03
Tear fluid | Tear | 1843 | 9597 | 11,440 | 5.21
Bronchoalveolar lavage fluid | BALF | 3241 | 7392 | 10,633 | 2.28
Milk fluid | Milk | 2324 | 7333 | 9657 | 3.16
Synovial fluid | Synovial | 1525 | 9624 | 11,149 | 6.31
Nipple aspirate fluid | NAF | 1640 | 9800 | 11,440 | 5.98
Cervical-vaginal discharge | CVF | 877 | 12,062 | 12,939 | 13.75
Pleural effusion | PE | 1437 | 9087 | 10,524 | 6.32
Sputum | Sputum | 1696 | 9515 | 11,211 | 5.61
Exhaled breath condensate | EBC | 326 | 14,903 | 15,229 | 45.71
Pancreatic juice | PJ | 646 | 12,957 | 13,603 | 20.06
Sweat | Sweat | 1162 | 11,660 | 12,822 | 10.03
Table 2. Benchmarks of MultiSec on the independent testing datasets of 17 human body fluids.

Fluid Name | ACC | SN | SP | F1 | MCC | AUC
Plasma/Serum | 0.8682 | 0.8614 | 0.8599 | 0.8835 | 0.7323 | 0.9383
Saliva | 0.9257 | 0.8373 | 0.9528 | 0.8456 | 0.7968 | 0.9546
Urine | 0.8730 | 0.8630 | 0.9002 | 0.8911 | 0.7396 | 0.9462
Cerebrospinal fluid | 0.8296 | 0.7806 | 0.8917 | 0.7792 | 0.6410 | 0.8867
Seminal fluid | 0.8270 | 0.7924 | 0.8686 | 0.7503 | 0.6182 | 0.8976
Amniotic fluid | 0.9254 | 0.8817 | 0.9444 | 0.8550 | 0.8058 | 0.9654
Tear fluid | 0.9174 | 0.8614 | 0.9390 | 0.7670 | 0.7218 | 0.9493
Bronchoalveolar lavage fluid | 0.8801 | 0.8256 | 0.9073 | 0.8087 | 0.7220 | 0.9400
Milk fluid | 0.8979 | 0.7823 | 0.9093 | 0.7759 | 0.7119 | 0.9152
Synovial fluid | 0.9049 | 0.8230 | 0.9101 | 0.6910 | 0.6411 | 0.9219
Nipple aspirate fluid | 0.9209 | 0.8811 | 0.9352 | 0.7577 | 0.7189 | 0.9644
Cervical-vaginal discharge | 0.9385 | 0.8629 | 0.9544 | 0.6638 | 0.6597 | 0.9715
Pleural effusion | 0.8950 | 0.8014 | 0.9009 | 0.6697 | 0.6175 | 0.9068
Sputum | 0.8863 | 0.7375 | 0.8833 | 0.6614 | 0.5981 | 0.9177
Exhaled breath condensate | 0.9878 | 0.5538 | 0.9956 | 0.6542 | 0.6644 | 0.9276
Pancreatic juice | 0.9577 | 0.8837 | 0.9606 | 0.6686 | 0.6731 | 0.9810
Sweat | 0.8986 | 0.7802 | 0.9104 | 0.5724 | 0.5380 | 0.9340
Table 3. Comparative average benchmarks of MultiSec and other approaches on the independent testing datasets of 14 human body fluids.

Method | ACC | SN | SP | F1 | MCC | AUC
DT | 0.7998 | 0.4163 | 0.8783 | 0.4750 | 0.3430 | 0.7457
SVM | 0.7521 | 0.7158 | 0.7677 | 0.5737 | 0.4260 | 0.8201
DeepSec | 0.8159 | 0.7736 | 0.8219 | 0.6436 | 0.5331 | 0.8777
MultiSecRS | 0.9120 | 0.6456 | 0.9564 | 0.7337 | 0.6828 | 0.9254
MultiSec | 0.8893 | 0.8221 | 0.9081 | 0.7649 | 0.6859 | 0.9313
The best results are in bold. MultiSecRS denotes MultiSec with random sampling.
Table 4. Comparison of MultiSec and DeepSec on computational resources.

Method | Number of Body Fluids | Number of Networks | Number of Parameters (K) | Training Time (h)
DeepSec | 14 | 56 | 68.37 × 56 | 33.16
MultiSec | 17 | 1 | 40.32 | 1.11
Table 5. The numbers and proportions of potentially secreted proteins in 17 human body fluids.

Fluid Name | Number of PSP | Number of CP | Ratio of PSP
Plasma/Serum | 5154 | 8691 | 0.59
Saliva | 5083 | 9553 | 0.53
Urine | 4590 | 8280 | 0.55
Cerebrospinal fluid | 5356 | 9714 | 0.55
Seminal fluid | 6742 | 9049 | 0.75
Amniotic fluid | 4189 | 8607 | 0.49
Tear fluid | 3774 | 8777 | 0.43
Bronchoalveolar lavage fluid | 6039 | 9538 | 0.63
Milk fluid | 5403 | 10,568 | 0.51
Synovial fluid | 4106 | 9085 | 0.45
Nipple aspirate fluid | 4224 | 8822 | 0.48
Cervical-vaginal discharge | 1244 | 7339 | 0.17
Pleural effusion | 4767 | 9744 | 0.49
Sputum | 5120 | 9027 | 0.57
Exhaled breath condensate | 1271 | 5092 | 0.25
Pancreatic juice | 2301 | 6691 | 0.34
Sweat | 3384 | 7451 | 0.45
PSP denotes potentially secreted proteins. CP denotes candidate proteins.
Table 6. Potentially universal secreted proteins in 17 human body fluids with an overall probability greater than 90%.

Id | Accession | Overall Probability (p̂)
1 | Q9H2R5 | 0.9909
2 | Q96P15 | 0.9908
3 | Q9Y5K2 | 0.9844
4 | Q86WD7 | 0.9790
5 | Q4G0T1 | 0.9673
6 | Q96PF1 | 0.9622
7 | P49863 | 0.9575
8 | P12544 | 0.9568
9 | P0DOX4 | 0.9537
10 | P06315 | 0.9530
11 | A0A0C4DH39 | 0.9496
12 | A0A0G2JMI3 | 0.9463
13 | P51124 | 0.9453
14 | Q8IXH8 | 0.9392
15 | P23946 | 0.9330
16 | O95932 | 0.9280
17 | Q7Z410 | 0.9224
18 | Q9H114 | 0.9131
19 | Q0Z7S8 | 0.9068
20 | A0A0B4J1Z2 | 0.9021
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
