Article

TLCrys: Transfer Learning Based Method for Protein Crystallization Prediction

1 College of Computer Science, Nankai University, Tianjin 300350, China
2 College of Artificial Intelligence, Nankai University, Tianjin 300350, China
* Author to whom correspondence should be addressed.
Int. J. Mol. Sci. 2022, 23(2), 972; https://doi.org/10.3390/ijms23020972
Submission received: 13 December 2021 / Revised: 5 January 2022 / Accepted: 14 January 2022 / Published: 16 January 2022
(This article belongs to the Collection Feature Papers in Molecular Informatics)

Abstract

X-ray diffraction is one of the most common techniques for ascertaining protein structures, yet only 2–10% of proteins can produce diffraction-quality crystals. Several computational methods have been proposed to predict protein crystallization. Nevertheless, the current state-of-the-art computational methods are limited by the scarcity of experimental data, so the prediction accuracy of existing models has not yet reached a satisfactory level. To address these problems, we propose a novel transfer-learning-based framework for protein crystallization prediction, named TLCrys. The framework proceeds in two steps: pre-training and fine-tuning. The pre-training step adopts an attention mechanism to extract both global and local information from protein sequences. The representation learned in the pre-training step is treated as knowledge to be transferred and fine-tuned to enhance the performance of crystallization prediction. During pre-training, TLCrys adopts a multi-task learning method, which not only improves the learning ability of the protein encoder but also enhances the robustness and generalization of the protein representation. A multi-head self-attention layer guarantees that different levels of the protein representation can be exploited in the fine-tuning step. During transfer learning, the fine-tuning strategy used by TLCrys improves the task-specific learning ability of the network. Our method significantly outperforms all previous predictors on five crystallization prediction stages. Furthermore, the proposed methodology generalizes well to other protein sequence classification tasks.

1. Introduction

The functions of a protein are largely determined by its three-dimensional structure. Therefore, analyzing the three-dimensional structure of proteins is of great significance for understanding the molecular mechanisms of biological processes and for studying the pathogenesis of diseases. Furthermore, it can provide key information for the development and design of drugs for human diseases. At present, the main experimental methods used to determine the three-dimensional structure of proteins are electron microscopy [1], Nuclear Magnetic Resonance (NMR) spectroscopy [2], and X-ray diffraction crystallography (XRD) [3]. Compared with NMR and electron microscopy, XRD has the advantages of easy implementation, short execution time, and low cost. It has therefore become the most popular method: about 80% of the protein structures in the Protein Data Bank (PDB) were obtained by XRD. However, XRD requires a crystallizable protein, and only 2–10% of proteins can produce diffraction-quality crystals [4,5,6]. Hence, attempting X-ray diffraction crystallography on proteins that cannot crystallize under current experimental conditions is costly and time-consuming. It is therefore important to develop accurate and efficient computational methods that forecast whether a protein can crystallize under current experimental conditions.
Early computational methods are based on classical machine learning or statistical algorithms and mainly focus on feature extraction from protein sequences; examples include OBScore [7], ParCrys [8], CrystalP2 [9], XtalPred [10], PPCPred [11], SCMCRYS [12], SVMCRYS [13], PredPPCrys I & II [14], and Crysalis I & II [15]. These methods can be regarded as two-stage classification: (i) selection and extraction of physicochemical and structural features, and (ii) classification with different machine learning algorithms using the extracted features. However, all of these methods require laborious physicochemical and structural feature selection from the raw protein sequences, so their performance depends on the quality of feature extraction. Hence, these models lack generalization ability and robustness.
Deep learning has been widely applied in bioinformatics [16,17,18]. Recently, several remarkable end-to-end deep learning frameworks [19,20] have been utilized for crystallization prediction. In comparison with traditional machine learning algorithms, these methods integrate representation learning and model training in a unified architecture and do not require feature extraction before modeling. Despite these architectural advantages, existing deep learning models require large amounts of labeled data. However, compared with the number of known protein sequences, the amount of labeled crystallization data is limited. Therefore, the performance of these models is still not satisfactory in real-world applications.
Transfer learning [21] involves two domains: a source domain S and a target domain T. The knowledge acquired by the learning task T_S on the source domain is transferred to the learning task T_T on the target domain to improve its performance. Knowledge learned from the source domain can significantly enhance the robustness and generalization of the target task. Recently, pre-trained deep learning models such as Transformers [22] and BERT [23] have achieved remarkable success in natural language processing. These models involve two steps. First, pre-training is used to initialize the network weights and learn representations. Second, the downstream task is performed on top of the pre-trained model. Since protein sequences share many similarities with natural language, such transfer learning models are also suitable for modeling biological sequences. Unlabeled protein sequences implicitly contain significant structural and functional information. As a result, pre-training tasks learn protein representations [24,25] that can be transferred to downstream tasks such as secondary structure prediction and interaction prediction, where they help to achieve good performance [24,26].
To overcome the problems of insufficient labeled training data and inaccurate predictions, and to explore the internal correlation between protein sequence modeling and crystallization propensity, we propose a novel transfer-learning-based method for protein crystallization prediction, called TLCrys. The predictor consists of two steps: a protein representation pre-training step using a self-supervised multi-task model on protein sequences and Gene Ontology (GO) annotations, and a multi-head self-attention fine-tuning step on the protein crystallization dataset with pre-trained parameters. In summary, the main contributions of this paper are as follows.
  • To the best of our knowledge, this is the first time that transfer learning has been applied to sequence-based protein crystallization prediction. Compared with traditional machine learning models, our model is more interpretable and its predictions are more accurate.
  • Since protein sequences encode complex spatial structures, we use a global attention module and multi-task learning to pre-train the self-supervised model of protein sequences.
  • In the fine-tuning step, we apply a multi-head self-attention layer to extract different levels of protein features from the global representation space learned during pre-training.
To verify the effectiveness of the protein representations from the pre-training and fine-tuning steps, we design an end-to-end direct learning pipeline for comparison with TLCrys, in which the pre-training and fine-tuning models are combined and trained directly on the crystallization data. Experiments indicate that TLCrys is superior to current state-of-the-art models. In addition, to demonstrate the effectiveness and robustness of our model, we also carry out ablation experiments and a case study.

2. Materials and Methods

2.1. Overview of Model

The training process of the TLCrys model for protein crystallization prediction consists of two parts.
  • Learning task on source domain: self-supervised pre-training step for protein representation on protein sequences and Gene Ontology annotations.
  • Learning task on target domain: supervised fine-tuning step on protein crystallization dataset with pre-trained parameters.
The representations learned during pre-training can be regarded as the knowledge of the source domain. Transferring these representations as the input of the fine-tuning step helps to improve the performance of the target task, i.e., protein crystallization prediction. The key component of both parts is the attention module.

2.2. The Attention Module

The attention module used in the pre-training process of TLCrys was proposed by Brandes et al. [27] and is similar to the self-attention module used in the Transformer model [22]. A protein has an extremely complex spatial structure that is lost when the protein is flattened into a one-dimensional sequence: two residues that are spatially close to each other may be far apart in the sequence. Therefore, when constructing protein representations, we use attention modules that capture global as well as local characteristics. As illustrated in Figure 1, the attention module consists of two parallel paths, one local and one global. The local representations produced by the first path are 3D tensors of shape B × L × d_local, where B is the batch size, L is the sequence length of the mini-batch, and d_local is the dimension of the local representations. The second path produces 2D tensors of shape B × d_global as global representations.
In the local representation path, the input sequences pass through two different types of 1D convolution layers: a dilated convolution layer and a classical convolution layer. Meanwhile, in the global representation path, the input annotations pass through a fully-connected layer and then a broadcast layer. The broadcast layer is a fully-connected layer that transforms the d_global-channel global features into d_local-channel local features, which are then added to the local representation path; in this way, the global representations influence the local representations. These outputs are summed with the inputs through residual connections, pass through a normalization layer, and then go through the same structure again to generate the global representations. The local representations influence the global representations through the global attention block.
The global attention mechanism is a special form of attention. Whereas standard self-attention computes attention between every pair of position vectors, global attention assigns position-wise attention weights to the local input features according to the global input vector. The global attention module takes two kinds of inputs: a global representation vector x ∈ R^{d_global} and R local representation vectors S_1, S_2, ..., S_R ∈ R^{d_local}. It outputs a global representation vector y ∈ R^{d_value}, which is calculated as:
y = \sum_{i=1}^{R} z_i v_i ,
where the attention weights z_i are defined by:
(z_1, z_2, \ldots, z_R) = \mathrm{Softmax}\left( \frac{q^{\top} k_1}{\sqrt{d_{key}}}, \ldots, \frac{q^{\top} k_R}{\sqrt{d_{key}}} \right) .
The global query vector q, the positional key vectors k_i, and the local value vectors v_i are obtained by three separate linear transformations with weights W_q, W_k, and W_v:
q = \tanh(W_q x), \quad k_i = \tanh(W_k S_i), \quad v_i = \sigma(W_v S_i) ,
where W_k ∈ R^{d_key × d_local}, W_q ∈ R^{d_key × d_global}, and W_v ∈ R^{d_value × d_local} are trainable parameters, d_key is the scaling factor, and σ denotes the sigmoid function.
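The following NumPy sketch implements the global attention computation defined above for a single attention head and a single example; batching, initialization, and the surrounding module are omitted, and the published model itself is implemented in TensorFlow, so this is only an illustration of the equations.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def global_attention(x, S, W_q, W_k, W_v, d_key):
    """Single-head global attention as in the equations above (illustrative).

    x : (d_global,)   global representation vector
    S : (R, d_local)  local representation vectors S_1..S_R
    W_q : (d_key, d_global), W_k : (d_key, d_local), W_v : (d_value, d_local)
    Returns y : (d_value,)
    """
    q = np.tanh(W_q @ x)                    # global query vector
    K = np.tanh(S @ W_k.T)                  # (R, d_key) positional key vectors
    V = 1.0 / (1.0 + np.exp(-(S @ W_v.T)))  # (R, d_value) sigmoid-gated value vectors
    z = softmax(K @ q / np.sqrt(d_key))     # (R,) position-wise attention weights
    return z @ V                            # weighted sum of the value vectors
```

Because the single global query attends to all R positions at once, the cost of this operation grows linearly with the sequence length, which is the scaling advantage discussed below.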
To extract multi-view information from multiple representation sub-spaces simultaneously, a multi-head attention mechanism is applied, which computes attention in several heads in parallel and then aggregates the head outputs. Note that d_value = d_global / n_head, where n_head is the total number of attention heads. Compared with the self-attention mechanism, the global attention mechanism has the following advantages.
  • We use dilated convolution to expand the receptive field and capture multi-scale context information.
  • Global attention uses the global input as the query. Therefore, the amount of computation grows only linearly with the sequence length, whereas standard self-attention calculates a position-to-position attention matrix and its cost grows quadratically. The same linear scaling applies to the memory consumption of the model.
  • Owing to the information exchange between the local and global representations, we obtain not only the local features of adjacent amino acids but also the global features of the protein sequence.
  • The model, based on the convolutional and global attention layers, is more efficient and stable than that relying on recurrent layers.
As illustrated in Figure 2a, we pre-train the protein representation with a self-supervised dual-task method on protein sequences and Gene Ontology (GO) annotations. We use 26 tokens to represent a protein sequence, including the 20 standard amino acids, Selenocysteine (U), and an undefined amino acid (X); an OTHER token represents any other amino acid, and every sequence starts with a START token and ends with an END token. Sequences shorter than the sequence length of the mini-batch are padded with PAD tokens. The GO annotations of each sequence are encoded as a binary vector.
The pre-training step of TLCrys is based on the global attention module. In the local path, an embedding layer maps the input protein sequences into local representation vectors in R^{d_local}. In the global path, a fully-connected layer maps the input binary annotations into global representation vectors in R^{d_global}. The pre-training model stacks six attention modules in series: the global and local representations output by one module are fed into the next module to obtain updated global and local representations.
The pre-training dataset of protein sequences and GO annotations is extracted from UniRef90 (https://www.uniprot.org/uniref/) (accessed on 10 December 2021). To enhance the robustness of the model, we randomly replace 5% of the tokens with other tokens as noise. Furthermore, the input GO annotations are corrupted by removing each existing annotation with a probability of 25%, or by removing all annotations with a probability of 50%. In summary, the pre-training step is a dual task in which the model is expected to simultaneously recover both the protein sequence and its known GO annotations. The hyperparameter settings of the pre-training step are shown in Table 1.
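The sketch below shows how a single training example could be encoded and corrupted according to this description. The token inventory, vocabulary ordering, and corruption details reflect our reading of the text rather than the released pre-processing code, and the helper names are illustrative.

```python
import numpy as np

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY") + ["U", "X", "OTHER"]
SPECIAL = ["START", "END", "PAD"]
VOCAB = {tok: i for i, tok in enumerate(AMINO_ACIDS + SPECIAL)}  # 26 tokens in total

def encode_example(seq, go_ids, n_annotations, max_len, rng=np.random.default_rng(0)):
    """Encode one pre-training example (illustrative, not the released code).

    seq           : protein sequence string
    go_ids        : indices of the sequence's known GO annotations
    n_annotations : size of the GO annotation vocabulary
    max_len       : mini-batch sequence length (sequences are padded to it)
    Returns (noisy inputs, clean targets) for both tasks.
    """
    tokens = ["START"] + [aa if aa in VOCAB else "OTHER" for aa in seq] + ["END"]
    tokens += ["PAD"] * (max_len - len(tokens))
    ids = np.array([VOCAB[t] for t in tokens])

    # Randomly replace 5% of the tokens with other tokens as noise.
    mask = rng.random(len(ids)) < 0.05
    ids_noisy = np.where(mask, rng.integers(0, len(VOCAB), len(ids)), ids)

    # GO annotations as a binary vector, with random corruption.
    ann = np.zeros(n_annotations, dtype=np.float32)
    ann[list(go_ids)] = 1.0
    if rng.random() < 0.5:                                   # remove all annotations
        ann_noisy = np.zeros_like(ann)
    else:                                                    # drop each one with p = 0.25
        ann_noisy = ann * (rng.random(n_annotations) >= 0.25)

    return (ids_noisy, ann_noisy), (ids, ann)
```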

2.3. Pre-Training

The loss function of the pre-training step consists of two parts: the categorical cross-entropy over the protein sequences and the binary cross-entropy over the GO annotations. It is defined as follows.
L = - \sum_{i=1}^{l} S_i \cdot \log(\hat{S}_i) - \sum_{j=1}^{N} \left[ A_j \cdot \log(\hat{A}_j) + (1 - A_j) \cdot \log(1 - \hat{A}_j) \right] ,
where l is the sequence length, S_i is the one-hot encoding of the true token (one of the 26 token classes) at position i, and Ŝ_i is the corresponding vector of predicted probabilities; N is the number of GO annotations, A_j ∈ {0, 1} indicates whether annotation j is present, and Â_j ∈ [0, 1] is the predicted probability of annotation j.
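In TensorFlow this dual loss can be sketched as follows; the shapes are assumptions, and the sequence term uses sparse integer labels instead of explicit one-hot vectors, which is numerically equivalent.

```python
import tensorflow as tf

def pretraining_loss(seq_true, seq_pred, ann_true, ann_pred):
    """Dual-task pre-training loss (sketch): categorical cross-entropy over the
    26 sequence-token classes plus binary cross-entropy over the GO annotations.

    seq_true : (B, L) integer token ids     seq_pred : (B, L, 26) softmax outputs
    ann_true : (B, N) binary GO vectors     ann_pred : (B, N) sigmoid outputs
    """
    # Sum the token cross-entropy over the sequence positions, as in the equation.
    seq_term = tf.reduce_sum(
        tf.keras.losses.sparse_categorical_crossentropy(seq_true, seq_pred), axis=-1)
    # binary_crossentropy averages over the annotation axis, so multiply by N
    # to recover the sum over annotations used in the equation.
    n_ann = tf.cast(tf.shape(ann_true)[-1], seq_term.dtype)
    ann_term = tf.keras.losses.binary_crossentropy(ann_true, ann_pred) * n_ann
    return tf.reduce_mean(seq_term + ann_term)   # mean over the mini-batch
```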

2.4. Fine-Tuning

As depicted in Figure 2, we extract the global representation produced by each attention module and concatenate them with the final output global representation into a whole global representation L = concatenate[L_1, L_2, ..., L_n]. We design a multi-head self-attention layer to extract the important information from each representation.
The self-attention mechanism transforms the concatenated feature vector L into three feature vectors, the Query (Q), Key (K), and Value (V), by three different linear mappings. As depicted in Figure 2, the weight assigned to each value is calculated as the dot product of the query with the corresponding key:
\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left( \frac{Q K^{\top}}{\sqrt{d_k}} \right) V ,
where d_k, the dimension of the vector K, acts as the scaling factor, and ⊤ denotes the transpose operation. This operation is also called scaled dot-product attention [22]. Q, K and V are obtained by three linear transformations of the same input:
Q = L W^Q, \quad K = L W^K, \quad V = L W^V ,
where W^Q, W^K, W^V ∈ R^{d_L × d_k} are trainable parameters and d_L is the dimension of the feature map.
To extract multi-view information from multiple representation sub-spaces simultaneously, a multi-head attention mechanism is applied, where each head is an independent scaled dot-product attention module [22]:
\mathrm{head}_i = \mathrm{Attention}(Q W_i^Q, K W_i^K, V W_i^V) ,
\mathrm{Multi}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h) W^O ,
where W_i^Q, W_i^K, W_i^V ∈ R^{d_L × d_k^i} are linear transformation parameters analogous to those defined above, and W^O is the linear transformation that aggregates the information extracted by the different heads. Note that d_k^i = d_k / h, where h is the total number of attention heads; here h = 6.
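As an illustration, the sketch below stacks the global representations taken from the pre-trained modules as a short sequence and applies a 6-head self-attention layer followed by a pooled sigmoid output. Treating the representations as a length-n sequence, the pooling step, and the output layer are our assumptions; only d_global = 512 (Table 1) and h = 6 come from the paper.

```python
import tensorflow as tf

# Sketch of the fine-tuning head (assumptions noted in the text above).
n_modules, d_global, n_heads = 6, 512, 6

reps = tf.keras.Input(shape=(n_modules, d_global))    # global representations L_1..L_n
attn = tf.keras.layers.MultiHeadAttention(
    num_heads=n_heads, key_dim=d_global // n_heads)(reps, reps)  # 6-head self-attention
pooled = tf.keras.layers.GlobalAveragePooling1D()(attn)          # aggregate positions
prob = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)    # crystallization score
fine_tuning_head = tf.keras.Model(reps, prob)
```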
Our model adopts a binary cross-entropy loss function with l_2 regularization. The regularized objective function L(θ) is calculated as follows.
L(\theta) = - \sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right] + \lambda \lVert \theta \rVert_2^2 .
Here, ŷ_i is the predicted label of the i-th protein sequence, y_i is its true crystallization label, N is the size of the training set, λ is the l_2 regularization hyperparameter, and θ denotes all parameters of the model.
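A minimal sketch of this objective, assuming the l_2 penalty is applied to all trainable weights; the value of λ below is an arbitrary placeholder, not the published hyperparameter.

```python
import tensorflow as tf

def regularized_bce(y_true, y_pred, model, lam=1e-4):
    """Binary cross-entropy summed over the batch plus lambda * ||theta||_2^2.
    (Illustrative; lam = 1e-4 is a placeholder value.)"""
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)        # per-sample BCE
    l2 = tf.add_n([tf.reduce_sum(tf.square(w)) for w in model.trainable_weights])
    return tf.reduce_sum(bce) + lam * l2
```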
The model is trained using the Adam [28] optimizer, with mini-batch gradient descent to minimize the objective function. Initially, parameters of all layers in the pre-trained model are frozen, and only the newly added fully-connected layer is trained for up to 40 epochs. Then, all parameters are unfrozen and trained for up to 40 additional epochs. At the end, we train the model for 1 final epoch with a larger sequence length. Throughout all epochs, we reduce the learning rate on plateau and apply early stopping based on an independent validation set.
Early stopping is a useful strategy to prevent overfitting during training: when the validation loss has not decreased for a given number of epochs, the training procedure terminates. The models of all epochs are then evaluated on the validation set, and the one with the best performance is chosen as the final prediction model.
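The two-phase schedule described above can be sketched as follows; the encoder, the head, and the data are stand-ins rather than the real TLCrys model or crystallization dataset, and the learning rates and callback settings are assumptions.

```python
import numpy as np
import tensorflow as tf

# Stand-in encoder, head, and data used only to illustrate the fine-tuning schedule.
x_tr, y_tr = np.random.rand(256, 128).astype("float32"), np.random.randint(0, 2, 256)
x_va, y_va = np.random.rand(64, 128).astype("float32"), np.random.randint(0, 2, 64)
pretrained_encoder = tf.keras.Sequential([tf.keras.layers.Dense(64, activation="relu")])
model = tf.keras.Sequential([pretrained_encoder,
                             tf.keras.layers.Dense(1, activation="sigmoid")])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]

# Phase 1: freeze the pre-trained encoder and train only the new layers (<= 40 epochs).
pretrained_encoder.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="binary_crossentropy")
model.fit(x_tr, y_tr, validation_data=(x_va, y_va), epochs=40, callbacks=callbacks)

# Phase 2: unfreeze all parameters and continue training (<= 40 more epochs).
# The final epoch with a larger sequence length is omitted in this sketch.
pretrained_encoder.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="binary_crossentropy")
model.fit(x_tr, y_tr, validation_data=(x_va, y_va), epochs=40, callbacks=callbacks)
```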

2.5. Direct Learning

To verify the effectiveness of the protein representations learned in the pre-training and fine-tuning steps, we build an end-to-end pipeline that combines the two steps. For a fair comparison with the transfer learning process of TLCrys, the hyperparameter settings of the direct learning model are the same as those of TLCrys, and its training strategy is the same as in the fine-tuning process. Since the crystallization dataset lacks GO annotations, we use an all-zero vector as the GO annotation input.

3. Results

3.1. Dataset and Environment

As shown in Table 2, the experimental dataset of this paper comes from the PredPPCrys [14] model (https://doi.org/10.1371/journal.pone.0105902.s007) (accessed on 10 December 2021) and comprises five datasets in FASTA format. Each FASTA entry consists of a protein residue sequence and a tag. The five tags are: sequence cloning failed, production of protein material failed, purification failed, crystallization failed, and crystallizable. They indicate the final state reached in the protein crystallization experiments.
The tags define the propensity of a protein at different stages of crystallization, giving five prediction tasks: sequence cloning failure (CLF), protein material production failure (MF), protein purification failure (PF), crystallization failure (CF), and production of diffraction-quality crystals (CRYs). In the CLF task, only sequences tagged as cloning failed are negative. In the MF task, sequences tagged as cloning failed or material production failed are negative. In the PF task, sequences tagged as purification failed or crystallization failed are negative and crystallizable sequences are positive, while in the final diffraction-quality crystallization task only crystallizable sequences are positive. The sizes of the training and test sets are shown in Table 2, in which the sequence similarity of the test sets is 40%.
Our code is implemented in TensorFlow, a powerful deep learning framework. The trainable weight matrices of TLCrys are initialized with the default settings. TLCrys is trained on a single NVIDIA GeForce RTX 3070 GPU with 8 GB of memory.

3.2. Metrics

We evaluate our model on the dataset described in Section 3.1. Accuracy (ACC), Matthew's Correlation Coefficient (MCC), balanced F1 Score, sensitivity (SEN), specificity (SPEC), and precision (PRE) are commonly used metrics for binary classification. All of them are based on the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). They are defined as follows.
\mathrm{Accuracy\ (ACC)} = \frac{TP + TN}{TP + TN + FP + FN}
\mathrm{Specificity\ (SPEC)} = \frac{TN}{TN + FP}
\mathrm{Sensitivity\ (SEN)} = \frac{TP}{TP + FN}
\mathrm{Precision\ (PRE)} = \frac{TP}{TP + FP}
\mathrm{F1\ Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
Accuracy is the most common evaluation metric; it is the proportion of correctly classified samples among all samples. Specificity is the proportion of negative samples that are correctly predicted. Sensitivity is the proportion of positive samples that are correctly predicted. Precision is the proportion of samples classified as positive that are actually positive. The F1 score is a comprehensive evaluation metric, defined as the harmonic mean of precision and sensitivity. Matthew's Correlation Coefficient is a robust metric for binary classification on imbalanced data [29].
Furthermore, the receiver operating characteristic (ROC) curve describes the classification performance of a model by plotting the true-positive (TP) rate against the false-positive (FP) rate as the discrimination threshold varies. The area under the ROC curve (AUC) is an important indicator of the classification performance of a model.
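For reference, the sketch below computes these metrics from predicted probabilities; it is an illustrative helper, not part of the TLCrys code base, and it does not guard against division by zero in degenerate cases.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def binary_metrics(y_true, y_prob, threshold=0.5):
    """Compute ACC, SPEC, SEN, PRE, F1, MCC and AUC from predicted probabilities."""
    y_true = np.asarray(y_true).astype(int)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / (tp + tn + fp + fn)
    spec = tn / (tn + fp)
    sen = tp / (tp + fn)
    pre = tp / (tp + fp)
    f1 = 2 * pre * sen / (pre + sen)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    auc = roc_auc_score(y_true, y_prob)
    return {"ACC": acc, "SPEC": spec, "SEN": sen, "PRE": pre,
            "F1": f1, "MCC": mcc, "AUC": auc}
```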

3.3. Comparison with Other Methods

We evaluate the predictive performance of 13 models on the five-stage crystallization prediction dataset: OBScore [7], XtalPred [10], CrystalP2 [9], Crysalis I & II [15], PredPPCrys I & II [14], ParCrys [8], TargetCrys [30], PPCPred [11], SVMCRYS [13], SCMCRYS [12], and DeepCrystal [19]. Only the Crysalis I & II and PredPPCrys I & II models can predict the classification results for all five stages. The five stages are sequence cloning failure (CLF), protein material production failure (MF), protein purification failure (PF), crystallization failure (CF), and diffraction-quality crystals (CRYs). According to the statistics in Table 3, the performance of TLCrys on the five-stage protein crystallization prediction is better than that of all previous predictors. Some metrics, such as SPEC or SEN, reflect the classification tendency and do not represent the comprehensive performance of a classifier. In terms of comprehensive metrics such as AUC, MCC, and F1 score, TLCrys is significantly better than the other compared models.

3.4. Ablation Experiments

To verify the effectiveness of the multi-head self-attention layer in the fine-tuning module and to select the number of heads, we conduct ablation experiments. First, we remove the multi-head self-attention layer and feed the concatenated representations directly into the fully-connected output layer. Then, we test different numbers of heads to determine the best setting. As shown in Table 4, the model achieves the best performance on the comprehensive metrics with a 6-head self-attention layer. From these results, we conclude that the multi-head self-attention layer of TLCrys extracts important information from each representation, which improves the accuracy and robustness of the model.

3.5. Case Study

Transcription factors are sequence-specific proteins that regulate many essential biological processes. Sox transcription factors contain a highly conserved high-mobility group (HMG) domain of 70–80 amino acids [31]. In the Sox family, Sox9 targets several important organs, such as the brain, heart, kidney, and bone, while Sox17 participates in endoderm differentiation in early mammalian development [20]. Five protein sequences of Sox9 and Sox17 are used to validate the performance of TLCrys and other predictors. Recent research has shown that Sox9 HMG, Sox17 HMG, and Sox17EK HMG can produce diffraction-quality crystals, whereas there is no evidence that the full-length sequences of Sox9 and Sox17 (i.e., Sox9 FL and Sox17 FL) can do so [19,31,32]. Therefore, a good protein crystallization predictor should output low probability scores for Sox9 FL and Sox17 FL, and high probability scores for Sox9 HMG, Sox17 HMG, and Sox17EK HMG.
Table 5 shows the probability scores predicted by TLCrys and other predictors for the Sox transcription factor proteins. Here we use 0.5 as the threshold: if the score exceeds 0.5, the protein is predicted to be crystallizable. TLCrys and DeepCrystal correctly identify all the proteins that are able to produce diffraction-quality crystals. However, compared with DeepCrystal, TLCrys gives lower probability scores for the full-length sequences of Sox9 and Sox17 and higher probability scores for Sox9 HMG, Sox17 HMG, and Sox17EK HMG. These results suggest that TLCrys is a more credible protein crystallization predictor than current predictors.

4. Conclusions

Crystallization prediction of proteins is a significant task in computational biology. Motivated by the profound relationship between protein crystallization and protein structure, we designed a novel transfer-learning-based method for protein crystallization prediction, named TLCrys. In the source domain, TLCrys adopts a multi-task pre-training procedure to obtain global and local information from protein sequences for protein representation learning. In the target domain, these representations are treated as knowledge from the source domain that enhances the fine-tuning model for protein crystallization prediction. In addition, a multi-head self-attention mechanism is adopted in the fine-tuning step. The comparison between transfer learning and direct learning reveals the profound relationship between protein crystallization and protein structure. The ablation experiments demonstrate the effectiveness of the multi-head self-attention layer and guide the choice of the number of heads in the fine-tuning model. The case study validates the capability of our method for protein crystallization prediction. The experiments demonstrate that our method significantly outperforms other methods on the five crystallization prediction stages of the test sets. The proposed methodology is generally applicable and can be used for other protein sequence classification tasks.

Author Contributions

Conceptualization, H.Z. and C.J.; methodology, C.J. and H.Z.; software, C.J.; writing—original draft preparation, C.J., Z.S., C.K. and K.L.; writing—review and editing, H.Z.; supervision, H.Z.; funding acquisition, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China through Grants (No. 61973174), and Natural Science Foundation of Tianjin City from 2021: Research on intelligent method of protein classification and recognition towards function and annotation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The pre-training dataset on protein sequences and GO annotations are extracted from UniRef90 (https://www.uniprot.org/uniref/) (accessed on 10 December 2021). The experimental dataset of this paper is from PredPPCrys (https://doi.org/10.1371/journal.pone.0105902.s007) (accessed on 10 December 2021).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Terwilliger, T.C. The success of structural genomics. J. Struct. Funct. Genom. 2011, 12, 43–44.
  2. Becker, E.D. High Resolution NMR: Theory and Chemical Applications; Elsevier: Amsterdam, The Netherlands, 1999.
  3. Bradshaw, N.I.; Soares, D.C.; Zou, J.; Kennaway, C.K.; Porteous, D.J. 15:30 structural elucidation of disc1 pathway proteins using electron microscopy, chemical cross-linking and mass spectroscopy. Schizophr. Res. 2012, 136, S74.
  4. Terwilliger, T.C.; Stuart, D.; Yokoyama, S. Lessons from structural genomics. Annu. Rev. Biophys. 2009, 38, 371–383.
  5. Service, R. Structural biology. Structural genomics, round 2. Science 2005, 307, 1554–1558.
  6. Kurgan, L.; Mizianty, M.J. Sequence-Based Protein Crystallization Propensity Prediction for Structural Genomics: Review and Comparative Analysis. Nat. Sci. 2009, 1, 93–106.
  7. Overton, I.M.; Barton, G.J. A normalised scale for structural genomics target ranking: The OB-Score. FEBS Lett. 2006, 580, 4005–4009.
  8. Overton, I.M.; Padovani, G.; Girolami, M.A.; Barton, G.J. ParCrys: A Parzen window density estimation approach to protein crystallization propensity prediction. Bioinformatics 2008, 24, 901–907.
  9. Kurgan, L.; Razib, A.A.; Aghakhani, S.; Dick, S.; Jahandideh, S. CRYSTALP2: Sequence-based protein crystallization propensity prediction. BMC Struct. Biol. 2009, 9, 50.
  10. Slabinski, L.; Jaroszewski, L.; Rychlewski, L.; Wilson, I.A.; Lesley, S.A.; Godzik, A. XtalPred: A web server for prediction of protein crystallizability. Bioinformatics 2007, 23, 3403–3405.
  11. Mizianty, M.J.; Kurgan, L. Sequence-based prediction of protein crystallization, purification and production propensity. Bioinformatics 2011, 27, i24–i33.
  12. Charoenkwan, P.; Shoombuatong, W.; Lee, H.C.; Chaijaruwanich, J.; Huang, H.L.; Ho, S.Y. SCMCRYS: Predicting protein crystallization using an ensemble scoring card method with estimating propensity scores of P-collocated amino acid pairs. PLoS ONE 2013, 8, e72368.
  13. Kandaswamy, K.K.; Pugalenthi, G.; Suganthan, P.; Gangal, R. SVMCRYS: An SVM approach for the prediction of protein crystallization propensity from protein sequence. Protein Pept. Lett. 2010, 17, 423–430.
  14. Wang, H.; Wang, M.; Tan, H.; Li, Y.; Zhang, Z.; Song, J. PredPPCrys: Accurate Prediction of Sequence Cloning, Protein Production, Purification and Crystallization Propensity from Protein Sequences Using Multi-Step Heterogeneous Feature Fusion and Selection. PLoS ONE 2014, 9, e105902.
  15. Wang, H.; Feng, L.; Zhang, Z.; Webb, G.I.; Lin, D.; Song, J. Crysalis: An integrated server for computational analysis and design of protein crystallization. Sci. Rep. 2016, 6, 21383.
  16. Shi, Z.; Zhang, H.; Jin, C.; Quan, X.; Yin, Y. A representation learning model based on variational inference and graph autoencoder for predicting lncRNA-disease associations. BMC Bioinform. 2021, 22, 136.
  17. Jin, C.; Shi, Z.; Zhang, H.; Yin, Y. Predicting lncRNA–protein interactions based on graph autoencoders and collaborative training. In Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021.
  18. Jin, C.; Shi, Z.; Lin, K.; Zhang, H. Predicting miRNA-Disease Association Based on Neural Inductive Matrix Completion with Graph Autoencoders and Self-Attention Mechanism. Biomolecules 2022, 12, 64.
  19. Elbasir, A.; Moovarkumudalvan, B.; Kunji, K.; Kolatkar, P.R.; Mall, R.; Bensmail, H. DeepCrystal: A Deep Learning Framework for Sequence-based Protein Crystallization Prediction. Bioinformatics 2018, 35, 2216–2225.
  20. Jin, C.; Gao, J.; Shi, Z.; Zhang, H. ATTCry: Attention-based neural network model for protein crystallization prediction. Neurocomputing 2021, 463, 265–274.
  21. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
  22. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.U.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
  23. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805.
  24. Bepler, T.; Berger, B. Learning protein sequence embeddings using information from structure. arXiv 2019, arXiv:1902.08661.
  25. Cao, Y.; Shen, Y. TALE: Transformer-based protein function Annotation with joint sequence–Label Embedding. Bioinformatics 2021, 37, 2825–2833.
  26. Rives, A.; Meier, J.; Sercu, T.; Goyal, S.; Lin, Z.; Liu, J.; Guo, D.; Ott, M.; Zitnick, C.L.; Ma, J.; et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proc. Natl. Acad. Sci. USA 2021, 118, e2016239118.
  27. Brandes, N.; Ofer, D.; Peleg, Y.; Rappoport, N.; Linial, M. ProteinBERT: A universal deep-learning model of protein sequence and function. bioRxiv 2021.
  28. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014.
  29. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
  30. Hu, J.; Han, K.; Li, Y.; Yang, J.Y.; Shen, H.B.; Yu, D.J. TargetCrys: Protein crystallization prediction by fusing multi-view features with two-layered SVM. Amino Acids 2016, 48, 2533–2547.
  31. Vivekanandan, S.; Moovarkumudalvan, B.; Lescar, J.; Kolatkar, P.R. Crystallization and X-ray diffraction analysis of the HMG domain of the chondrogenesis master regulator Sox9 in complex with a ChIP-Seq-identified DNA element. Acta Crystallogr. Sect. Struct. Biol. Commun. 2015, 71, 1437–1441.
  32. Palasingam, P.; Jauch, R.; Ng, C.K.L.; Kolatkar, P.R. The structure of Sox17 bound to DNA reveals a conserved bending topology but selective protein interaction platforms. J. Mol. Biol. 2009, 388, 619–630.
Figure 1. Structure of attention module in TLCrys pre-training step.
Figure 2. The architecture of TLCrys consists of two parts: (a) self-supervised pre-training protein representation models on protein sequences and Gene Ontology annotations. (b) supervised fine-tuning on protein crystallization dataset with pre-trained parameters.
Table 1. Parameters of TLCrys.

Parameter        Value
Global Dim       512
Local Dim        128
Dilation Rate    5
Kernel Size      9
Stride Size      1
Key Dim          64
Value Dim        128
Head Number      4
Table 2. Statistics of datasets (N: negative samples; P: positive samples).

Task    Dataset    N        P
CLF     Train      9502     14428
        Test       1939     2852
MF      Train      17017    6913
        Test       3347     1444
PF      Train      2318     4702
        Test       474      932
CF      Train      224      631
        Test       35       138
CRYs    Train      19509    4421
        Test       3892     899
Table 3. Comparison of our two models with other methods on the test sets.

Task   Model             AUC    MCC    ACC (%)  SPEC (%)  SEN (%)  PRE (%)  F1 Score (%)
CLF    PredPPCrys I      0.711  0.296  65.33    63.58     66.50    73.16    69.67
       PredPPCrys II     0.725  0.322  66.54    65.56     67.20    74.44    70.63
       Crysalis I        0.731  0.332  66.98    66.60     67.22    75.56    71.15
       Crysalis II       0.756  0.365  68.34    69.95     68.34    76.85    72.35
       Direct learning   0.701  0.326  64.65    76.14     56.84    77.81    65.69
       TLCrys            0.817  0.455  72.90    74.28     71.96    80.46    77.00
MF     PredPPCrys I      0.772  0.380  69.93    68.21     72.88    49.95    59.27
       PredPPCrys II     0.793  0.416  71.95    71.36     73.30    52.70    61.32
       Crysalis I        0.759  0.377  70.23    69.93     70.99    49.25    58.15
       Crysalis II       0.793  0.427  73.08    73.58     73.09    54.15    62.21
       Direct learning   0.745  0.307  73.31    88.67     37.74    58.98    46.03
       TLCrys            0.848  0.446  78.37    92.53     45.57    72.47    55.90
PF     PredPPCrys I      0.800  0.460  74.83    70.52     77.02    83.77    80.25
       PredPPCrys II     0.872  0.579  79.73    81.43     78.86    89.31    83.76
       Crysalis I        0.796  0.436  73.87    67.80     73.87    82.47    77.93
       Crysalis II       0.793  0.427  73.08    73.58     73.09    54.15    62.21
       Direct learning   0.778  0.505  78.52    60.97     73.09    54.15    62.21
       TLCrys            0.861  0.583  81.58    70.25     87.34    85.24    86.27
CF     PredPPCrys I      0.712  0.280  67.05    67.65     66.91    89.42    76.54
       PredPPCrys II     0.735  0.175  69.47    68.89     69.50    97.80    81.26
       Crysalis I        0.739  0.281  65.50    70.59     64.23    89.80    74.89
       Crysalis II       0.752  0.337  62.57    85.29     56.93    93.97    70.90
       Direct learning   0.694  0.123  71.10    31.43     81.16    82.35    81.75
       TLCrys            0.785  0.459  79.77    68.57     82.61    91.20    86.69
CRYs   ParCrys           0.611  0.132  59.66    60.56     55.91    25.40    34.93
       OBScore           0.638  0.184  59.28    57.78     65.49    27.14    38.38
       CRYSTALP2         0.599  0.123  51.64    48.10     67.78    22.28    33.54
       XtalPred          -      0.224  65.04    65.61     62.51    29.31    39.91
       SVMCRYS           -      0.142  55.11    52.78     65.70    23.39    34.50
       PPCPred           0.704  0.254  63.63    62.09     70.67    29.03    41.15
       XtalPred-RF       -      0.205  60.94    59.67     66.41    27.56    38.95
       SCMCRYS           -      0.145  60.93    62.01     56.24    25.48    35.07
       PredPPCrys I      0.770  0.326  69.65    69.30     71.13    35.23    47.12
       PredPPCrys II     0.838  0.428  76.04    76.21     75.30    42.64    54.45
       Crysalis I        0.788  0.339  71.00    70.89     71.41    35.50    47.42
       Crysalis II       0.838  0.435  76.27    76.28     76.20    42.84    54.85
       DeepCrystal       0.858  0.477  77.83    77.43     79.51    45.90    58.20
       Direct learning   0.801  0.367  83.79    95.99     31.03    64.14    41.83
       TLCrys            0.879  0.546  87.24    94.96     53.84    71.18    61.30

Area Under Curve (AUC), Matthew's Correlation Coefficient (MCC), Accuracy (ACC), Specificity (SPEC), Sensitivity (SEN), Precision (PRE), and F1 Score.
Table 4. Ablation experiments on CRYs task.

Model                           AUC    MCC    ACC (%)  F1 Score (%)
Without multi-head attention    0.874  0.498  86.57    54.36
2 heads                         0.875  0.537  86.69    61.32
4 heads                         0.880  0.529  86.99    59.09
TLCrys (6 heads)                0.879  0.546  87.24    61.30
8 heads                         0.879  0.541  87.30    60.10
Table 5. Predicted probability value of the TLCrys and other predictors for Sox transcription factor proteins.

Model         Sox9 FL (−)  Sox9 HMG (+)  Sox17 FL (−)  Sox17 HMG (+)  Sox17 EK-HMG (+)
TLCrys        0.156        0.674         0.260         0.791          0.681
DeepCrystal   0.315        0.676         0.430         0.643          0.633
TargetCrys    0.032        0.045         0.037         0.029          0.031
Crysalis II   0.474        0.550         0.474         0.553          0.555
Crysalis I    0.438        0.482         0.487         0.567          0.557
PPCPred       0.039        0.658         0.089         0.462          0.523
CrystalP2     0.327        0.459         0.470         0.436          0.402
“+” represents crystallizable protein and “−” represents non-crystallizable protein.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
