Article

Adaptive Bi-Encoder Model Selection and Ensemble for Text Classification

Youngki Park 1 and Youhyun Shin 2
1 Department of Computer Education, Chuncheon National University of Education, Chuncheon 24328, Republic of Korea
2 Department of Computer Science and Engineering, Incheon National University, Incheon 22012, Republic of Korea
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(19), 3090; https://doi.org/10.3390/math12193090
Submission received: 16 September 2024 / Revised: 30 September 2024 / Accepted: 1 October 2024 / Published: 2 October 2024

Abstract

Can bi-encoders, without additional fine-tuning, achieve a performance comparable to fine-tuned BERT models in classification tasks? To answer this question, we present a simple yet effective approach to text classification using bi-encoders without the need for fine-tuning. Our main observation is that state-of-the-art bi-encoders exhibit varying performance across different datasets. Therefore, our proposed approaches involve preparing multiple bi-encoders and, when a new dataset is provided, selecting and ensembling the most appropriate ones based on the dataset. Experimental results show that, for text classification tasks on subsets of the AG News, SMS Spam Collection, Stanford Sentiment Treebank v2, and TREC Question Classification datasets, the proposed approaches achieve performance comparable to fine-tuned BERT-Base, DistilBERT-Base, ALBERT-Base, and RoBERTa-Base. For instance, using the well-known bi-encoder model all-MiniLM-L12-v2 without additional optimization resulted in an average accuracy of 77.84%. This improved to 89.49% through the application of the proposed adaptive selection and ensemble techniques, and further increased to 91.96% when combined with the RoBERTa-Base model. We believe that this approach will be particularly useful in fields such as K-12 AI programming education, where pre-trained models are applied to small datasets without fine-tuning.

1. Introduction

One effective approach to text classification is fine-tuning state-of-the-art transformer models such as BERT [1], DistilBERT [2], ALBERT [3], and RoBERTa [4]. Since the original fine-tuned BERT model outperformed OpenAI’s GPT (GPT-1) [5] in text classification tasks, numerous BERT variants and techniques have been proposed, leading to continuous improvements in text classification accuracy. More recently, approaches utilizing large language models like GPT-4 [6] and LLaMA [7] have also emerged for text classification [8]. However, fine-tuned BERT-based models continue to demonstrate high performance in text classification tasks and remain widely used.
These BERT-based fine-tuning approaches, however, are not effective in every situation. While fine-tuning can be very effective in many cases, it can also be a very “expensive” task in some contexts. For example, in environments where it is difficult to use a GPU, applying these approaches is not easy. Additionally, in environments where small amounts of training data are frequently updated, incremental and decremental learning are challenging. This is especially true in AI education for primary and secondary school students, where learners expect to see training results immediately [9], making these approaches difficult to apply. Although lightweight models like TinyBERT [10] are continuously being developed, fine-tuning even these models can still be an expensive task in certain situations.
An alternative approach leverages Sentence BERT (SBERT) bi-encoders [11,12], which are trained on sentence pairs using a Siamese network structure composed of identical BERT-based models. These bi-encoders embed sentences into vectors, enabling the assessment of semantic similarity through cosine similarity between the vectors. Originally, the SBERT paper evaluated these bi-encoders on semantic textual similarity datasets, reflecting their primary design for tasks such as sentence retrieval rather than classification. However, once pre-trained, SBERT bi-encoders are highly adaptable and easy to use, making them increasingly popular for text classification tasks. For example, [13] employs the well-known SBERT models all-MiniLM-L6-v2 and all-mpnet-base-v2 for unsupervised text classification. Similarly, [9] integrates an SBERT bi-encoder into an educational programming environment for children [14], allowing young students to gain hands-on experience with training and inferencing machine learning models.
Our research question is as follows: Can SBERT bi-encoders, without additional fine-tuning, achieve performance comparable to fine-tuned BERT models in classification tasks? To the best of our knowledge, there has been no direct and systematic comparison between SBERT bi-encoder models and fine-tuned BERT models in text classification tasks. In particular, there has been limited exploration of how performance changes with varying dataset sizes. This paper aims to address this research question with the goal of narrowing this research gap.
In this paper, we propose straightforward yet effective approaches to address this question. The proposed approaches are based on the observation that different SBERT bi-encoders often perform particularly well on different datasets. First, we prepare multiple high-performing SBERT bi-encoders. When given a training dataset, we dynamically select the bi-encoders that demonstrate the best performance through validation. These selected bi-encoders are then ensembled, and we use various nearest neighbor techniques for inference. Experimental results on datasets of different sizes show that while the proposed approaches do not outperform BERT-based fine-tuning on large datasets, they generally outperform it as the dataset size decreases. We believe that the results of this research will be valuable for those considering SBERT bi-encoders for classification tasks, especially in scenarios where BERT-based fine-tuning is challenging.
The main contributions of this paper are as follows:
  • We introduce a foundational version of the proposed approaches, utilizing k-NN text classification with a bi-encoder model (Section 2).
  • We present novel adaptive selection and ensemble techniques, which significantly improve the performance of the basic approaches introduced in the previous section (Section 3).
  • We present experimental results from datasets of various sizes (Section 4.1 and Section 4.2), followed by a comprehensive analysis of the proposed approaches (Section 4.3).

2. k-NN Text Classification Using a Bi-Encoder Model

2.1. Text Classification with a Bi-Encoder Model

We define the text classification task as follows: Given two datasets, a training set and a test set, the objective is to predict the labels for the sentences in the test set based on the sentences and labels in the training set. Formally, the training set is denoted as $D_{\mathrm{train}} = \{(s_i, l(s_i))\}_{i=1}^{N_{\mathrm{train}}}$, where $N_{\mathrm{train}}$ is the number of training examples, $s_i$ represents the $i$-th sentence in the training set, and $l(s_i)$ is its corresponding label from a predefined set of possible labels $L$. Similarly, the test set is denoted as $D_{\mathrm{test}} = \{(t_j, l(t_j))\}_{j=1}^{N_{\mathrm{test}}}$, where $N_{\mathrm{test}}$ is the number of test examples, $t_j$ represents the $j$-th sentence in the test set, and $l(t_j)$ is its true label. The goal is to train a model on $D_{\mathrm{train}}$ that can accurately predict the label $l(t_j)$ for each sentence $t_j$ in $D_{\mathrm{test}}$. The model’s accuracy is measured by the percentage of correct predictions on the test set.
Our proposed approaches to text classification proceed as follows:
  • Training Phase: We employ a pre-trained Sentence BERT (SBERT) bi-encoder to convert each sentence $s_i$ in $D_{\mathrm{train}}$ into an embedding vector $v(s_i)$. The training set is then updated to $D_{\mathrm{train}} = \{(s_i, l(s_i), v(s_i))\}_{i=1}^{N_{\mathrm{train}}}$.
  • Inference Phase: For each sentence $t_j$ in $D_{\mathrm{test}}$, we generate its corresponding embedding vector $v(t_j)$ using the same SBERT bi-encoder. The test set is then updated to $D_{\mathrm{test}} = \{(t_j, l(t_j), v(t_j))\}_{j=1}^{N_{\mathrm{test}}}$. To predict the label for each sentence $t_j$, we compute the cosine similarity between $v(t_j)$ and the embedding vectors $v(s_i)$ of all sentences $s_i$ in the training set $D_{\mathrm{train}}$. The predicted label $l(t_j)$ is assigned as follows:
    $$l(t_j) = l(s_{i^*}) \quad \text{where} \quad i^* = \arg\max_i \, \mathrm{sim}(v(t_j), v(s_i)).$$
    Here, $i^*$ is the index of the sentence in $D_{\mathrm{train}}$ whose embedding $v(s_{i^*})$ has the highest cosine similarity with the embedding $v(t_j)$.
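The following is a minimal sketch of these two phases, assuming the sentence-transformers package and the all-MiniLM-L12-v2 model used as the default bi-encoder later in the paper; the toy sentences and labels are illustrative, not taken from the experiments.

```python
import numpy as np
from sentence_transformers import SentenceTransformer, util

# Illustrative toy data (not from the paper's datasets).
train_sentences = ["the movie was wonderful", "a dull, lifeless film", "an instant classic"]
train_labels = ["positive", "negative", "positive"]
test_sentences = ["an absolutely delightful picture"]

# Training phase: embed every training sentence once with a pre-trained bi-encoder.
encoder = SentenceTransformer("all-MiniLM-L12-v2")
train_vecs = encoder.encode(train_sentences)

# Inference phase: embed each test sentence and assign the label of the
# training sentence whose embedding has the highest cosine similarity (1-NN).
test_vecs = encoder.encode(test_sentences)
similarities = util.cos_sim(test_vecs, train_vecs).cpu().numpy()   # shape (N_test, N_train)
predictions = [train_labels[int(np.argmax(row))] for row in similarities]
print(predictions)
```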

2.2. Enhanced k-NN Classification Techniques

To enhance the basic approach described above, we extend traditional k-nearest neighbor (k-NN) classification techniques in the following ways. We exploit k-NN classification because it is well-suited for handling embedding vectors generated by SBERT bi-encoders.
  • Majority Voting: For each sentence embedding $v(t_j)$ in $D_{\mathrm{test}}$, we identify the $k$ most similar embeddings from the set of sentence embeddings in $D_{\mathrm{train}}$ based on cosine similarity. The predicted label $l(t_j)$ for the sentence $t_j$ in $D_{\mathrm{test}}$ is then determined by majority voting among the labels corresponding to these $k$ most similar sentences:
    $$l(t_j) = \arg\max_{l \in L} \sum_{i \in N_k(v(t_j))} C(l(s_i) = l)$$
    Here, $N_k(v(t_j))$ represents the indices corresponding to the $k$ most similar embeddings to $v(t_j)$ in $D_{\mathrm{train}}$, and $C(l(s_i) = l)$ is an indicator function that equals 1 if the label $l(s_i)$ is equal to $l$, and 0 otherwise.
  • Weighted Voting: For each sentence embedding $v(t_j)$ in $D_{\mathrm{test}}$, we identify the $k$ most similar embeddings $v(s_i)$ in $D_{\mathrm{train}}$ by comparing their cosine similarity scores. Instead of simply counting label occurrences, we sum the similarity scores for each label. The predicted label $l(t_j)$ for $t_j$ is the label with the highest sum of cosine similarity scores:
    $$l(t_j) = \arg\max_{l \in L} \sum_{i \in N_k(v(t_j))} \mathrm{sim}(v(t_j), v(s_i)) \cdot C(l(s_i) = l)$$
    where $\mathrm{sim}(v(t_j), v(s_i))$ refers to the cosine similarity between $v(t_j)$ and $v(s_i)$, and $N_k(v(t_j))$ denotes the set of indices for the $k$ most similar sentence embeddings to $v(t_j)$ in $D_{\mathrm{train}}$. The indicator function $C(l(s_i) = l)$ returns 1 if $l(s_i) = l$, and 0 otherwise.
  • Fair Weighted Voting: To prevent bias towards labels that are more frequent in the training set, we introduce a “fair weighted voting” strategy (a code sketch follows this list). For each sentence embedding $v(t_j)$ in $D_{\mathrm{test}}$, we perform the following steps: For each possible label $l \in L$, we select the top $m$ nearest neighbors from the sentence embeddings $v(s_i)$ in $D_{\mathrm{train}}$ that have the label $l$, based on their cosine similarity to $v(t_j)$. The value of $m$ is defined as follows:
    $$m = \min\left(k, \ \min_{l \in L} N_{\mathrm{train}}(l)\right)$$
    where $k$ is a predefined constant, and $N_{\mathrm{train}}(l)$ represents the number of training examples with label $l$. Once the top $m$ nearest neighbors for each label $l$ are identified, we proceed similarly to the weighted voting method. We calculate the total similarity score for these neighbors and predict the label $l(t_j)$ for the sentence $t_j$ in $D_{\mathrm{test}}$ as the label with the highest cumulative score:
    $$l(t_j) = \arg\max_{l \in L} \sum_{i \in M(v(t_j), l)} \mathrm{sim}(v(t_j), v(s_i))$$
    Here, $M(v(t_j), l)$ represents the indices of the top $m$ nearest embeddings with label $l$ in $D_{\mathrm{train}}$ that are most similar to the sentence embedding $v(t_j)$.
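As a concrete illustration, the sketch below implements the fair weighted voting rule under the assumption that the cosine similarities between one test embedding and all training embeddings have already been computed (e.g., with util.cos_sim as in the earlier sketch); the function and variable names are ours, not the authors’.

```python
from collections import Counter
import numpy as np

def fair_weighted_vote(sims, train_labels, k=10):
    """sims: 1-D array of cosine similarities between one test embedding and
    every training embedding; train_labels: labels aligned with sims."""
    class_counts = Counter(train_labels)
    # m = min(k, size of the smallest class in the training set)
    m = min(k, min(class_counts.values()))

    scores = {}
    for label in class_counts:
        # Indices of training sentences carrying this label.
        idx = np.array([i for i, l in enumerate(train_labels) if l == label])
        # Top-m neighbors within this label, ranked by similarity to the test embedding.
        top_m = idx[np.argsort(sims[idx])[::-1][:m]]
        # Cumulative similarity score for this label.
        scores[label] = float(sims[top_m].sum())
    return max(scores, key=scores.get)
```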

3. Adaptive Selection and Ensemble Techniques

In this section, we introduce additional techniques that expand upon the text classification approaches described in Section 2. Section 3.1 presents a method for dynamically selecting a subset of Sentence BERT (SBERT) bi-encoder models from the available SBERT bi-encoders based on the given dataset. Section 3.2 describes the process of ensembling these selected bi-encoders. In Section 3.3, we present a simple adaptive selection approach that incorporates both existing methods and our proposed approach.

3.1. Adaptive Selection of Bi-Encoders

Given the training set $D_{\mathrm{train}} = \{(s_i, l(s_i))\}_{i=1}^{N_{\mathrm{train}}}$ and the set of available bi-encoders $B = \{B_1, B_2, \ldots, B_b\}$, the process for adaptively selecting the best bi-encoder $B^*$ is as follows:
  • Divide the training set $D_{\mathrm{train}}$ into $F$ equal-sized folds $\{D_{\mathrm{train}}^{(f)}\}_{f=1}^{F}$, where each fold serves as a validation set once, and the remaining $F-1$ folds form the training subset.
  • For each bi-encoder $B_i \in B$ and for each fold $f$, train $B_i$ on the corresponding training subset and evaluate it on the corresponding validation fold $D_{\mathrm{train}}^{(f)}$.
  • Define the cross-validated accuracy $A(B_i)$ for bi-encoder $B_i$ as follows:
    $$A(B_i) = \frac{1}{F} \sum_{f=1}^{F} A(B_i, D_{\mathrm{train}}^{(f)})$$
    where $A(B_i, D_{\mathrm{train}}^{(f)})$ represents the accuracy of bi-encoder $B_i$ on the $f$-th validation fold.
  • Select the bi-encoder $B^*$ that maximizes the cross-validated accuracy:
    $$B^* = \arg\max_{B_i \in B} A(B_i)$$
The selected bi-encoder $B^*$ is then used for further processing, applying the strategies described in Section 2 (e.g., majority voting, weighted voting, or fair weighted voting) on the training and test sets. A minimal code sketch of this selection loop is shown below.
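The sketch assumes scikit-learn’s KFold for the F folds and uses a simple 1-NN classifier like the one sketched in Section 2.1 as the validation classifier; in the paper’s full setup the fair weighted voting rule would be used instead, and the helper names are ours.

```python
import numpy as np
from sklearn.model_selection import KFold
from sentence_transformers import SentenceTransformer, util

def classify_1nn(train_vecs, train_labels, test_vecs):
    sims = util.cos_sim(test_vecs, train_vecs).cpu().numpy()
    return [train_labels[int(np.argmax(row))] for row in sims]

def select_best_encoder(sentences, labels, encoder_names, n_folds=5):
    labels = np.array(labels)
    best_name, best_acc = None, -1.0
    for name in encoder_names:
        # Embed the whole training set once per candidate bi-encoder.
        vecs = SentenceTransformer(name).encode(sentences)
        fold_accs = []
        for train_idx, val_idx in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(vecs):
            preds = classify_1nn(vecs[train_idx], list(labels[train_idx]), vecs[val_idx])
            fold_accs.append(float(np.mean(np.array(preds) == labels[val_idx])))
        # Cross-validated accuracy A(B_i); keep the best bi-encoder B*.
        acc = float(np.mean(fold_accs))
        if acc > best_acc:
            best_name, best_acc = name, acc
    return best_name, best_acc
```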

3.2. Ensemble of Bi-Encoders

The ensemble process is as follows:
  • Perform the $F$-fold cross-validation process as described in Section 3.1 for each bi-encoder $B_i \in B$, and compute their respective cross-validated accuracies $A(B_i)$.
  • Select the top $H$ bi-encoders with the highest cross-validated accuracies:
    $$B^* = \{B_{(1)}, B_{(2)}, \ldots, B_{(H)}\} \quad \text{where} \quad A(B_{(1)}) \geq A(B_{(2)}) \geq \cdots \geq A(B_{(H)})$$
    Here, $B_{(1)}$ is the bi-encoder with the highest cross-validated accuracy, $B_{(2)}$ is the bi-encoder with the second-highest accuracy, and so on until the $H$-th best model, $B_{(H)}$.
  • For each sentence $s_i$ in $D_{\mathrm{train}}$, generate $H$ embedding vectors using the selected $H$ models. This results in $H$ vectors $v^{(1)}(s_i), v^{(2)}(s_i), \ldots, v^{(H)}(s_i)$ for each sentence $s_i$.
  • Similarly, transform each sentence $t_j$ in $D_{\mathrm{test}}$ into $H$ embedding vectors $v^{(1)}(t_j), v^{(2)}(t_j), \ldots, v^{(H)}(t_j)$ using the same $H$ models.
  • Compute the “ensemble similarity” between a sentence $s_i$ in $D_{\mathrm{train}}$ and a sentence $t_j$ in $D_{\mathrm{test}}$ as the average of the cosine similarities between their corresponding vectors from all $H$ models. This can be expressed as follows:
    $$\mathrm{sim}_{\mathrm{ensemble}}(s_i, t_j) = \frac{1}{H} \sum_{h=1}^{H} \mathrm{sim}(v^{(h)}(s_i), v^{(h)}(t_j))$$
  • Use the computed $\mathrm{sim}_{\mathrm{ensemble}}(s_i, t_j)$ during the inference phase for the process of finding the nearest neighbors as described in Section 2.
This approach leverages the strengths of multiple top-performing bi-encoders, enhancing the accuracy of the text classification model.
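The ensemble similarity can be computed as in the following sketch, which assumes the H selected bi-encoders are given by name; it averages the per-encoder cosine similarity matrices, in line with the formula above.

```python
import numpy as np
from sentence_transformers import SentenceTransformer, util

def ensemble_similarity(train_sentences, test_sentences, encoder_names):
    """Return an (N_test, N_train) matrix of cosine similarities averaged over H encoders."""
    sim_sum = None
    for name in encoder_names:                      # the H selected bi-encoders
        encoder = SentenceTransformer(name)
        train_vecs = encoder.encode(train_sentences)
        test_vecs = encoder.encode(test_sentences)
        sims = util.cos_sim(test_vecs, train_vecs).cpu().numpy()
        sim_sum = sims if sim_sum is None else sim_sum + sims
    # sim_ensemble(s_i, t_j) = average of the H per-encoder similarities.
    return sim_sum / len(encoder_names)
```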

3.3. Adaptive Selection of Existing and Proposed Approaches

Recall that our proposed approaches do not require a fine-tuning process, making them particularly effective when fine-tuning is not feasible or applicable. However, in scenarios where fine-tuning is possible and both BERT models and bi-encoder models are available, adaptive selection between the BERT models and the proposed approaches becomes an option. Assume we have a BERT model $M_{\mathrm{BERT}}$ and our model $M_{\mathrm{OURS}}$. The process of adaptive selection between these approaches is as follows:
  • Fine-tune the BERT model $M_{\mathrm{BERT}}$ on 90% of the training set $D_{\mathrm{train}}$, while reserving 10% as a validation set $D_{\mathrm{val}}$.
  • Calculate the validation accuracy $A(M_{\mathrm{BERT}}, D_{\mathrm{val}})$.
  • If $A(M_{\mathrm{BERT}}, D_{\mathrm{val}})$ exceeds a threshold $\tau$, use $M_{\mathrm{BERT}}$ for inference on the test set $D_{\mathrm{test}}$. Otherwise, use our model $M_{\mathrm{OURS}}$ for inference on $D_{\mathrm{test}}$.
Here, our model $M_{\mathrm{OURS}}$ may already incorporate the techniques discussed in Section 2, Section 3.1, and Section 3.2. Therefore, there could be cases where adaptive selection is applied at two stages: first during the process described in Section 3.1, and again during the selection between $M_{\mathrm{BERT}}$ and $M_{\mathrm{OURS}}$. If the existing approaches perform better on some data and the proposed approaches are more effective on other data, this method can be expected to yield better overall performance. A short sketch of this threshold-based switch follows.
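In the sketch below, fine_tune_bert, evaluate, and ours_predict are hypothetical placeholders standing in for a standard Hugging Face fine-tuning routine, a validation-accuracy computation, and the bi-encoder classifier described in Sections 2, 3.1, and 3.2; only the split and the threshold logic follow the steps above.

```python
from sklearn.model_selection import train_test_split

def choose_and_predict(train_sentences, train_labels, test_sentences, tau=0.8):
    # Hold out 10% of the training set as a validation set D_val.
    tr_s, val_s, tr_l, val_l = train_test_split(
        train_sentences, train_labels, test_size=0.1, random_state=0)

    bert_model = fine_tune_bert(tr_s, tr_l)        # placeholder: fine-tune M_BERT
    val_acc = evaluate(bert_model, val_s, val_l)   # placeholder: A(M_BERT, D_val)

    # Use the fine-tuned model only if its validation accuracy exceeds the threshold tau.
    if val_acc > tau:
        return bert_model.predict(test_sentences)  # placeholder predict interface
    return ours_predict(train_sentences, train_labels, test_sentences)  # placeholder: M_OURS
```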

4. Experiments

In this section, we present the experimental results from various combinations of the proposed approaches across different datasets. Section 4.1 describes the experimental setup. The results are then presented in Section 4.2, followed by a brief analysis in Section 4.3.

4.1. Experimental Setup

4.1.1. Datasets

For our experiments, we utilize four types of datasets. The first is the Stanford Sentiment Treebank (SST) dataset [15], a widely used sentiment analysis dataset from the General Language Understanding Evaluation (GLUE) benchmark [16]. The second dataset is the AG News dataset [17], which is highly popular and featured in the official PyTorch tutorial. The third dataset is the SMS Spam Collection dataset [18], another widely used dataset containing samples related to spam and non-spam messages. The fourth dataset is the Text REtrieval Conference (TREC) Question Classification dataset [19], which includes labeled questions in both the training and test sets.
Although the SST dataset provides a validation set, the other datasets do not. To maintain consistency across all experiments, we decided not to use the validation set from the SST dataset either. Therefore, all datasets were configured to include only a training set and a test set. If validation is required, a portion of the training set should be used.
In this paper, we assume that Sentence BERT (SBERT) bi-encoders are primarily applied to tasks involving small datasets. To reflect this assumption, we used the original datasets and also created smaller subsets by reducing the size of each dataset to 1/10, 1/100, and 1/1000 of its original size. However, if any of these reduced datasets resulted in fewer than five samples per class, we excluded them from the experiments. Based on this approach, we prepared a total of 13 datasets for our experiments.
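The paper does not specify the exact sampling procedure, so the sketch below assumes simple random subsampling combined with the described minimum-class-size filter; it is illustrative only.

```python
import random
from collections import Counter

def make_subset(sentences, labels, fraction, min_per_class=5, seed=0):
    """Randomly sample a fraction of the training set; return None if any class
    ends up with fewer than min_per_class samples (such subsets were excluded)."""
    n = max(1, int(len(sentences) * fraction))
    idx = random.Random(seed).sample(range(len(sentences)), n)
    sub_s = [sentences[i] for i in idx]
    sub_l = [labels[i] for i in idx]
    counts = Counter(sub_l)
    if len(counts) < len(set(labels)) or min(counts.values()) < min_per_class:
        return None
    return sub_s, sub_l

# Example: subsets at 100%, 10%, 1%, and 0.1% of the original size.
# subsets = {f: make_subset(train_sentences, train_labels, f) for f in (1.0, 0.1, 0.01, 0.001)}
```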
Table 1 shows the statistics for the training datasets, and Table 2 provides the statistics for the test datasets. These tables list the dataset name, the number of classes, the total number of samples, the average class sample size, the minimum class sample size (the size of the class with the fewest samples), and the maximum class sample size (the size of the class with the most samples). Each training set is paired with a test set that shares the same prefix.

4.1.2. Existing Models

In our experiments, we selected the base models of BERT, DistilBERT, ALBERT, and RoBERTa for comparison with our proposed approaches. The authors of the original BERT paper found that fine-tuning the BERT model for 2, 3, or 4 epochs yielded good performance across various tasks. Consequently, we initially fine-tuned all models for 4 epochs on our datasets. However, this number of epochs was insufficient for our datasets, so we also fine-tuned the models for 10 epochs. In the remainder of this paper, models fine-tuned for 4 epochs are referred to as BERT, DistilBERT, ALBERT, and RoBERTa, while those fine-tuned for 10 epochs are denoted as BERT-10, DistilBERT-10, ALBERT-10, and RoBERTa-10.
We also conducted experiments using 5-fold cross-validation and early stopping. However, early stopping did not produce satisfactory results for our datasets, and thus, we do not discuss these results in subsequent sections. For example, when training BERT on the entire TREC dataset with early stopping (using a patience of 1 epoch), the model achieved an accuracy of 95.20%, which was lower than the accuracy obtained after 4 epochs (96.60%) and 10 epochs (97.20%) without early stopping.
We tokenized the dataset using the AutoTokenizer class from the Hugging Face Transformers library, applying truncation to ensure input sequences fit within the model’s maximum length. Additionally, dynamic padding was implemented via the DataCollatorWithPadding class, ensuring that sequences in each batch were padded to the same length during training.
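A minimal sketch of this preprocessing follows, assuming the Hugging Face transformers and datasets packages; the checkpoint name and dataset variable are illustrative.

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # or the DistilBERT/ALBERT/RoBERTa checkpoints

def tokenize(batch):
    # Truncate so inputs fit within the model's maximum sequence length.
    return tokenizer(batch["text"], truncation=True)

# tokenized = raw_dataset.map(tokenize, batched=True)   # raw_dataset: a datasets.Dataset with a "text" column
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)  # pads each batch dynamically during training
```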

4.1.3. Proposed Models

For the proposed approaches, we selected 13 Sentence-BERT (SBERT) bi-encoder models available on sbert.net as of September 2024. These models were chosen based on their “average performance on encoding sentences across 14 diverse tasks from different domains”, as reported on the website. Models exceeding 1GB in size were excluded, as our goal was to demonstrate that even relatively smaller bi-encoder models can achieve a high level of accuracy. Among these are widely used models such as all-mpnet-base-v2, all-distilroberta-v1, all-MiniLM-L12-v2, and all-MiniLM-L6-v2. Table 3 lists each model’s name, its number of embedding dimensions, and its number of parameters, with models abbreviated as Bi-encoder 1, 2, ⋯, 13 for simplicity.
We conducted experiments on the five types of approaches described below, using techniques from Section 2 and Section 3. We set F to 5, k to 10, and τ to 0.8, with the all-MiniLM-L12-v2 model as the default bi-encoder.
  • Approaches Without Pre-trained Transformers: These models perform text classification based on the training and inference phases described in Section 2, without using transformer models. The “GloVe.6B.300d” model generates embedding vectors with the GloVe model [20] and classifies text by finding the most similar instances. The “Jaccard+Word1Gram” and “Jaccard+Char3Gram” models skip embedding generation, directly finding the most similar instances using Jaccard similarity based on word-level 1-grams and character-level 3-grams, respectively. Similarly, the “Cosine+TFIDF” model finds the most similar instances using cosine similarity applied to tf-idf values. These tf-idf values are calculated from the training data using the TfidfVectorizer class from the scikit-learn library [21] (a sketch of this baseline appears after this list).
  • Bi-Encoder-Based Approaches: These approaches use Sentence BERT (SBERT) bi-encoder models. “Model NN” employs the default bi-encoder, following the training and inference phases described in Section 2, which finds the most similar embeddings. “Model MV” extends this by incorporating the majority voting technique. “Model WV” is similar to “Model MV” but uses weighted voting. Finally, “Model FW” builds on “Model WV” by employing our fair weighted voting technique.
  • Bi-Encoder + Adaptive Selection: This approach “Model FW-AS” incorporates the proposed adaptive selection technique described in Section 3.1, along with our fair weighted voting method described in Section 2.
  • Bi-Encoder + Adaptive Selection + Ensemble: These approaches use the techniques proposed in Section 2, Section 3.1, and Section 3.2. “Model FW-AS-2BI” extends “Model FW-AS” by applying the ensemble technique from Section 3.2, with H set to 2. “Model FW-AS-3BI” is similar to “Model FW-AS-2BI,” but with H set to 3.
  • Bi-Encoder + Adaptive Selection + Ensemble + Existing Approach: This approach “Model FW-AS-2BI-BT” extends “Model FW-AS-2BI” by incorporating the RoBERTa-10 model, following the technique described in Section 3.3.
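The sketch below illustrates the Cosine+TFIDF baseline mentioned in the first item above, using scikit-learn’s TfidfVectorizer and cosine_similarity; the variable names are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# tf-idf weights are fitted on the training data only.
vectorizer = TfidfVectorizer()
train_tfidf = vectorizer.fit_transform(train_sentences)
test_tfidf = vectorizer.transform(test_sentences)

# Each test sentence receives the label of its most similar training sentence.
sims = cosine_similarity(test_tfidf, train_tfidf)
predictions = [train_labels[int(np.argmax(row))] for row in sims]
```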

4.2. Experimental Results

Table 4 presents the experimental results comparing the existing approaches and the proposed approaches across the 13 datasets. Because the 13 datasets can be grouped into four categories (“SST”, “AGNEWS”, “SMSSpam”, “TREC”), we also report the average accuracy for each category. The overall average is calculated as the micro-average of all accuracies across the 13 datasets.
The results show that models trained for 10 epochs (BERT-10, DistilBERT-10, ALBERT-10, RoBERTa-10) outperformed those trained for 4 epochs (BERT, DistilBERT, ALBERT, RoBERTa). This contrasts with the original BERT paper, which recommended training for two to four epochs and may be due to the characteristics of our datasets. As mentioned in Section 4.1.2, early stopping did not yield positive results and was therefore not included. RoBERTa consistently performed better than BERT, DistilBERT, and ALBERT, with RoBERTa-10 achieving the highest average accuracy at 89.78%.
In the “Approaches Without Pre-trained Transformers” category, the overall average accuracy was relatively low. The GloVe model demonstrated performance similar to the Jaccard and cosine similarity-based approaches, with an average accuracy of 73.26%, suggesting that GloVe may not be well suited for text classification tasks. Conversely, it is notable that the Jaccard and cosine similarity models achieved comparable accuracy despite their simplicity.
For the “Bi-Encoder-Based Approaches,” Model NN, which adheres to the basic training and inference phases described in Section 2, showed limited performance, with an accuracy of 77.84%. However, applying additional techniques in Model MV, Model WV, and Model FW led to improved results, with all models achieving average accuracies above 82%. These results emphasize the importance of using such techniques when working with bi-encoder models without fine-tuning. Among these, our proposed Model FW performed the best, especially in scenarios with smaller datasets.
When adaptive selection was applied, as detailed in Section 3.1, the average accuracy increased to 88.06%. Without adaptive selection, high performance was observed in the AGNEWS and SMSSpam datasets, while the SST and TREC datasets showed lower results. Adaptive selection improved accuracy in these cases as well.
Additionally, the application of the ensemble technique, as discussed in Section 3.2, following adaptive selection, resulted in consistently high accuracy across most datasets. For instance, in the SST 0.1% dataset, accuracy improved from below 70% with only adaptive selection to nearly 80% after introducing the ensemble technique. This suggests that the ensemble compensated for weaknesses in individual bi-encoders. There was only a slight difference in performance between the two-model ensemble (Model FW-AS-2BI) and the three-model ensemble (Model FW-AS-3BI). Model FW-AS-2BI achieved an average accuracy of 89.49%, which closely matched RoBERTa-10’s accuracy of 89.78%. Our proposed approaches demonstrate the advantage of achieving comparable classification accuracy without the need for fine-tuning, making them particularly useful in cases where fine-tuning is impractical.
Although the main advantage of the proposed approaches is that they do not require fine-tuning, they can be integrated with existing BERT models when fine-tuning is possible. As expected, combining our best-performing model (Model FW-AS-2BI) with the highest-performing existing model (RoBERTa-10) resulted in the highest average accuracy of 91.96%. This exceeds the 89.78% accuracy achieved by RoBERTa-10, which demonstrated the best performance among the existing approaches. Repeated trials yielded consistent results, and an unpaired two-tailed t-test conducted across five iterations for both models produced a p-value of less than 0.00001 (p-value < 0.05), indicating a statistically significant difference.
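The significance test can be reproduced with scipy as sketched below; the accuracy lists are hypothetical placeholders standing in for the five per-run averages, not the paper’s measured results.

```python
from scipy.stats import ttest_ind

accs_ours = [0.919, 0.921, 0.920, 0.918, 0.920]      # placeholder: five runs of Model FW-AS-2BI-BT
accs_roberta = [0.897, 0.899, 0.898, 0.896, 0.898]   # placeholder: five runs of RoBERTa-10

t_stat, p_value = ttest_ind(accs_ours, accs_roberta)  # unpaired two-tailed t-test
print(p_value < 0.05)
```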

4.3. Analysis of Results

According to the experimental results in Section 4.2, the performance of existing approaches declines significantly as the training dataset size decreases. Figure 1 focuses on the two smallest training datasets from each of the four categories (SST, AGNEWS, SMSSpam, TREC). Four models represent existing approaches: BERT-10, DistilBERT-10, ALBERT-10, and RoBERTa-10. The remaining four models represent the proposed approaches: Model NN, Model FW-AS, Model FW-AS-2BI, and Model FW-AS-2BI-BT.
As shown in Figure 1, all models experienced a drop in accuracy as the training dataset size decreased. This result was expected because the test dataset remained fixed while the training set was reduced. However, the decline in accuracy for existing approaches was particularly pronounced. For example, all existing approaches achieved over 80% accuracy on the SST 1% dataset but dropped below 60% on the SST 0.1% dataset. In contrast, Model FW-AS-2BI showed only a slight decline, maintaining relatively high accuracy despite the reduced training dataset size.
Our final two models, Model FW-AS-2BI and Model FW-AS-2BI-BT, consistently performed well across different datasets. By contrast, Model NN showed relatively low performance across all datasets except SMSSpam. Although Model NN performed well on SMSSpam, even the Jaccard+Char3Gram model, based on Jaccard similarity, achieved over 90% average accuracy on these datasets. This suggests that relying solely on a basic bi-encoder is not effective for text classification tasks. It also indicates that applying at least the proposed adaptive selection technique, even without the ensemble technique, is necessary when using bi-encoders for classification tasks.
Recall that we selected bi-encoder 4 (all-MiniLM-L12-v2), one of the most widely used models, as the default for Model NN. Table 5 presents the accuracy of Model NN when different bi-encoders were used in place of bi-encoder 4. According to sbert.net, bi-encoders with smaller indices showed better performance on average across 14 diverse tasks. Specifically, bi-encoder 1 consistently performed better than bi-encoder 2, bi-encoder 2 outperformed bi-encoder 3, and so on, with bi-encoder 13 ranking the lowest among the 13 selected models. We aimed to confirm whether this ranking would hold in our experiments.
The results were surprising. In our experiments, bi-encoder 8, ranked eighth, achieved the highest average accuracy, followed by bi-encoders 1, 2, 3, 12, 10, 13, 4, 9, 5, 6, 11, and 7. A key observation is the significant variation in bi-encoder performance depending on the dataset. For the SST datasets, bi-encoder 8 achieved the highest average accuracy, while bi-encoder 1 demonstrated superior performance on the AGNEWS datasets. Bi-encoder 13 showed the strongest performance on the SMSSpam datasets, and bi-encoder 12 produced the best results on the TREC datasets. Additionally, some bi-encoders performed best for specific dataset sizes; for example, bi-encoder 5 performed best on the AGNEWS 100% dataset, while bi-encoder 2 achieved the highest accuracy on the AGNEWS 10% dataset. This variability in top-performing bi-encoders across different dataset types and sizes underscores the effectiveness of Model FW-AS, which employs adaptive selection, and Model FW-AS-2BI, which also incorporates ensemble techniques.
Figure 2 compares the performance of the 13 bi-encoder models and three of the proposed models (Model FW-AS, Model FW-AS-2BI, and Model FW-AS-2BI-BT). Every bi-encoder model showed notably low performance in at least one dataset category. For instance, although Bi-encoder 1 is regarded as one of the best-performing bi-encoder models, its average accuracy remained in the 60% range on the TREC datasets. In contrast, Model FW-AS maintained an average accuracy of approximately 80%, Model FW-AS-2BI around 85%, and Model FW-AS-2BI-BT around 89% on the lowest-performing datasets. These results emphasize the importance of adaptive selection and ensemble techniques in improving bi-encoder performance for classification tasks.
To further investigate the performance improvements from adaptive selection and ensemble techniques, we applied LIME [22], an explainable AI method. LIME was used to identify the three words that most influenced the creation of the embedding vectors for each of the first 100 sentences in the TREC test dataset across various bi-encoder models. This resulted in 300 words per model, which were visualized as word clouds based on their frequency, as shown in Figure 3. The word clouds revealed notable differences between models, highlighting substantial variation in their embedding approaches. For instance, Bi-encoder 12 (distiluse-base-multilingual-cased-v1), which achieved the highest performance on the TREC dataset, had some overlap in its top five most frequent words with other bi-encoders; however, none of them had an identical top five word set. These distinctions in embedding approaches likely contributed to the performance improvements observed with adaptive selection and ensemble techniques.
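The sketch below shows one plausible way to obtain such word-level attributions with LIME for a bi-encoder nearest-neighbor classifier; the probability wrapper predict_proba is our own construction and is not taken from the paper, and the train/test variables are assumed from the earlier sketches.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("distiluse-base-multilingual-cased-v1")  # Bi-encoder 12 in Table 3
class_names = sorted(set(train_labels))
train_vecs = encoder.encode(train_sentences)

def predict_proba(texts):
    # Turn nearest-neighbor similarities into per-class scores that LIME can perturb.
    vecs = encoder.encode(list(texts))
    sims = util.cos_sim(vecs, train_vecs).cpu().numpy()
    scores = np.stack(
        [sims[:, [i for i, l in enumerate(train_labels) if l == c]].max(axis=1)
         for c in class_names], axis=1)
    scores = (scores + 1.0) / 2.0                  # map cosine range [-1, 1] to [0, 1]
    return scores / scores.sum(axis=1, keepdims=True)

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(test_sentences[0], predict_proba,
                                         num_features=3, top_labels=1)
top_label = explanation.available_labels()[0]
print(explanation.as_list(label=top_label))        # the three most influential words
```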

5. Related Work

The Sentence BERT (SBERT) bi-encoder is a Siamese network architecture based on the BERT model that generates effective sentence embeddings, primarily used for semantic similarity search. While cross-encoder models like BERT and RoBERTa can compute sentence similarities by jointly encoding pairs of sentences, they become significantly slower when handling large numbers of sentences due to increased computational complexity.
SBERT bi-encoders are commonly applied in search tasks but are also valuable for classification due to their ease of use. For instance, PatentSBERTa [23] enhances SBERT with Augmented SBERT [24] to create patent embeddings, which are then classified using the k-nearest neighbors algorithm. Another approach [25] employs SBERT bi-encoder embeddings to train feedforward neural networks for classifying academic paper abstracts into one of seven categories. Similarly, DocSCAN [26] uses SBERT bi-encoders to generate text embeddings and constructs a weakly supervised training set based on the intuition that adjacent vectors in the embedding space should share the same label, enabling unsupervised text classification. The method presented in [13] involves creating initial label vectors by vectorizing label keywords; these vectors are then used to find similar candidate documents. Afterward, these documents are vectorized to form refined label vectors. Classification is performed by identifying nearest neighbors among document vectors using these refined label vectors.
Despite these diverse applications, there is still a lack of studies utilizing SBERT bi-encoders for traditional supervised classification tasks without training additional neural networks or performing fine-tuning. Therefore, we propose various techniques for leveraging SBERT bi-encoders directly for classification tasks. These methods are highly effective, especially in scenarios where fine-tuning may not be feasible, such as in K-12 machine learning education. For instance, in block-based programming environments commonly used in K-12 settings, students typically interact with AI by dragging and dropping blocks, leading to frequent incremental and decremental learning. In such cases, fine-tuning models is often not feasible. However, our proposed methods do not require fine-tuning, making them easily adaptable to these environments. These techniques can also be explored for applications in other fields, such as software defect detection [27], classification of medical subjects from text-based health counseling data [28], and classification tasks involving low-resource languages [29].

6. Conclusions

In this paper, we presented an efficient Sentence BERT (SBERT) bi-encoder-based text classification approach without fine-tuning neural networks. Our main strategy was to apply adaptive selection and ensemble techniques using various types of bi-encoder models. This approach is based on the intuition that no single bi-encoder outperforms all others; instead, each bi-encoder has its own strengths and performs exceptionally well on specific types of datasets. Our experimental results are noteworthy: in contrast to the well-known belief that cross-encoder models outperform bi-encoder models, our bi-encoder-based approach without fine-tuning shows comparable results to the fine-tuned base models of BERT, DistilBERT, ALBERT, and RoBERTa. We expect that the proposed approaches will be highly useful in scenarios where fine-tuning is not possible or feasible, as using bi-encoder models allows us to achieve a good level of performance in text classification tasks.
The limitations of this paper can be summarized in three main points. First, storing many bi-encoder models requires substantial storage space. While this may not be an issue for most users, it could pose a challenge for those with low-performance computers, such as very young students. Second, the time required for validation during adaptive selection can be relatively long. Currently, cross-validation is used to identify the best models; however, as the number of models increases, validation time grows significantly. In cases where many models are used and fast execution time is required, additional techniques may need to be considered to improve validation speed. Third, because the performance of the selected bi-encoders significantly affects the final results, it is important for users to prepare a diverse set of bi-encoders when applying adaptive selection and ensemble techniques to enhance performance.
Our future research direction is threefold: First, we plan to apply our techniques to real-world scenarios where fine-tuning is not easily employed, such as in K-12 machine learning education. Second, we intend to refine our current approaches to improve execution time by developing more lightweight bi-encoder models, enabling more effective use in practical applications. Third, we aim to devise novel approaches for text classification based on text-matching algorithms such as Jaccard and cosine similarity, even without using deep learning. In our experiments, we found that these similarity-based approaches can achieve high performance depending on the dataset. This suggests that text-matching methods can be useful for classification tasks in various scenarios, especially when extremely lightweight models are required.

Author Contributions

Conceptualization, Y.P. and Y.S.; methodology, Y.P. and Y.S.; investigation, Y.P. and Y.S.; data curation, Y.P. and Y.S.; writing—original draft preparation, Y.P. and Y.S.; writing—review and editing, Y.P. and Y.S.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Chuncheon National University of Education Grant in 2023.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
  2. Sanh, V.; Debut, L.; Chaumond, J.; Wolf, T. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv 2019, arXiv:1910.01108.
  3. Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; Soricut, R. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv 2019, arXiv:1909.11942.
  4. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv 2019, arXiv:1907.11692.
  5. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding with Unsupervised Learning. Technical Report, OpenAI, 2018. Available online: https://openai.com/index/language-unsupervised/ (accessed on 30 September 2024).
  6. OpenAI. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774.
  7. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and efficient foundation language models. arXiv 2023, arXiv:2302.13971.
  8. Li, Z.; Li, X.; Liu, Y.; Xie, H.; Li, J.; Wang, F.-L.; Li, Q.; Zhong, X. Label supervised LLaMA finetuning. arXiv 2023, arXiv:2310.01208.
  9. Park, Y.; Shin, Y. A block-based interactive programming environment for large-scale machine learning education. Appl. Sci. 2022, 12, 13008.
  10. Jiao, X.; Yin, Y.; Shang, L.; Jiang, X.; Chen, X.; Li, L.; Wang, F.; Liu, Q. TinyBERT: Distilling BERT for natural language understanding. arXiv 2019, arXiv:1909.10351.
  11. Reimers, N.; Gurevych, I. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China, 3–7 November 2019.
  12. Reimers, N.; Gurevych, I. Making monolingual sentence embeddings multilingual using knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online, 16–20 November 2020.
  13. Schopf, T.; Braun, D.; Matthes, F. Evaluating unsupervised text classification: Zero-shot and similarity-based approaches. In Proceedings of the 2022 6th International Conference on Natural Language Processing and Information Retrieval, Sanya, China, 16–18 December 2022; pp. 6–15.
  14. Park, Y.; Shin, Y. Tooee: A novel scratch extension for K-12 big data and artificial intelligence education using text-based visual blocks. IEEE Access 2021, 9, 149630–149646.
  15. Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C.D.; Ng, A.; Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, 18–21 October 2013; pp. 1631–1642.
  16. Wang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; Bowman, S.R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv 2018, arXiv:1804.07461.
  17. Del Corso, G.M.; Gulli, A.; Romani, F. Ranking a stream of news. In Proceedings of the 14th International Conference on World Wide Web, Chiba, Japan, 10–14 May 2005; pp. 97–106.
  18. Almeida, T.A.; Hidalgo, J.M.G.; Yamakami, A. Contributions to the study of SMS spam filtering: New collection and results. In Proceedings of the 11th ACM Symposium on Document Engineering, Mountain View, CA, USA, 19–22 September 2011; pp. 259–262.
  19. Li, X.; Roth, D. Learning question classifiers. In Proceedings of the 19th International Conference on Computational Linguistics, Taipei, Taiwan, 24 August–1 September 2002.
  20. Pennington, J.; Socher, R.; Manning, C.D. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543.
  21. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
  22. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
  23. Bekamiri, H.; Hain, D.S.; Jurowetzki, R. PatentSBERTa: A deep NLP-based hybrid model for patent distance and classification using augmented SBERT. Technol. Forecast. Soc. Chang. 2024, 206, 123536.
  24. Thakur, N.; Reimers, N.; Daxenberger, J.; Gurevych, I. Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; pp. 296–310.
  25. Piao, G. Scholarly text classification with sentence BERT and entity embeddings. In Proceedings of the Trends and Applications in Knowledge Discovery and Data Mining: PAKDD 2021 Workshops, Delhi, India, 11 May 2021; Springer International Publishing: Cham, Switzerland, 2021; pp. 79–87.
  26. Stammbach, D.; Ash, E. DocSCAN: Unsupervised text classification via learning from neighbors. arXiv 2021, arXiv:2105.04024.
  27. Petrovic, A.; Jovanovic, L.; Bacanin, N.; Antonijevic, M.; Savanovic, N.; Zivkovic, M.; Milovanovic, M.; Gajic, V. Exploring metaheuristic optimized machine learning for software defect detection on natural language and classical datasets. Mathematics 2024, 12, 2918.
  28. Sung, Y.W.; Park, D.S.; Kim, C.G. A study of BERT-based classification performance of text-based health counseling data. CMES-Comput. Model. Eng. Sci. 2023, 135, 1–20.
  29. Veisi, H.; Awlla, K.M.; Abdullah, A.A. KuBERT: Central Kurdish BERT model and its application for sentiment analysis. Res. Sq. 2024.
Figure 1. Comparison of accuracy across 8 selected models on small-scale datasets.
Figure 2. Comparison of average accuracies between bi-encoder models and the proposed models across the four categories.
Figure 3. Word clouds illustrating the most influential words for generating embedding vectors from the first 100 sentences of the TREC test dataset, for each Sentence BERT bi-encoder.
Table 1. Summary of the 13 training datasets used in the experiments.

| Training Dataset | # of Classes | # of Samples | Average Class Sample Size | Minimum Class Sample Size | Maximum Class Sample Size |
|---|---|---|---|---|---|
| SST 100% | 2 | 67,349 | 33,675 | 29,780 | 37,569 |
| SST 10% | 2 | 6734 | 3367 | 2932 | 3802 |
| SST 1% | 2 | 673 | 337 | 291 | 382 |
| SST 0.1% | 2 | 67 | 34 | 31 | 36 |
| AGNEWS 100% | 4 | 120,000 | 30,000 | 30,000 | 30,000 |
| AGNEWS 10% | 4 | 12,000 | 3000 | 2964 | 3019 |
| AGNEWS 1% | 4 | 1200 | 300 | 269 | 320 |
| AGNEWS 0.1% | 4 | 120 | 30 | 22 | 42 |
| SMSSpam 100% | 2 | 5074 | 2537 | 686 | 4388 |
| SMSSpam 10% | 2 | 507 | 254 | 55 | 452 |
| SMSSpam 1% | 2 | 50 | 25 | 8 | 42 |
| TREC 100% | 6 | 5452 | 909 | 86 | 1250 |
| TREC 10% | 6 | 545 | 91 | 6 | 125 |
Table 2. Summary of the 4 test datasets used in the experiments.

| Test Dataset | # of Classes | # of Samples | Average Class Sample Size | Minimum Class Sample Size | Maximum Class Sample Size |
|---|---|---|---|---|---|
| SST | 2 | 872 | 436 | 428 | 444 |
| AGNEWS | 4 | 7600 | 1900 | 1900 | 1900 |
| SMSSpam | 2 | 500 | 250 | 61 | 439 |
| TREC | 6 | 500 | 83 | 9 | 138 |
Table 3. Summary of the 13 bi-encoders used in the experiments.

| Bi-Encoder ID | Model Name | # of Embedding Dimensions | # of Parameters |
|---|---|---|---|
| Bi-encoder 1 | all-mpnet-base-v2 | 768 | 109,486,464 |
| Bi-encoder 2 | multi-qa-mpnet-base-dot-v1 | 768 | 109,486,464 |
| Bi-encoder 3 | all-distilroberta-v1 | 768 | 82,118,400 |
| Bi-encoder 4 | all-MiniLM-L12-v2 | 384 | 33,360,000 |
| Bi-encoder 5 | multi-qa-distilbert-cos-v1 | 768 | 66,362,880 |
| Bi-encoder 6 | all-MiniLM-L6-v2 | 384 | 22,713,216 |
| Bi-encoder 7 | multi-qa-MiniLM-L6-cos-v1 | 384 | 22,713,216 |
| Bi-encoder 8 | paraphrase-multilingual-mpnet-base-v2 | 768 | 278,043,648 |
| Bi-encoder 9 | paraphrase-albert-small-v2 | 768 | 11,683,584 |
| Bi-encoder 10 | paraphrase-multilingual-MiniLM-L12-v2 | 384 | 117,653,760 |
| Bi-encoder 11 | paraphrase-MiniLM-L3-v2 | 384 | 17,389,824 |
| Bi-encoder 12 | distiluse-base-multilingual-cased-v1 | 512 | 135,127,808 |
| Bi-encoder 13 | distiluse-base-multilingual-cased-v2 | 512 | 135,127,808 |
Table 4. Experimental results for existing and our proposed approaches across different datasets (accuracy, %).

| Model Name | SST 100% | SST 10% | SST 1% | SST 0.1% | SST Avg. | AGNEWS 100% | AGNEWS 10% | AGNEWS 1% | AGNEWS 0.1% | AGNEWS Avg. | SMSSpam 100% | SMSSpam 10% | SMSSpam 1% | SMSSpam Avg. | TREC 100% | TREC 10% | TREC Avg. | Total Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Existing Approaches | | | | | | | | | | | | | | | | | | |
| BERT | 90.83 | 89.45 | 87.96 | 50.46 | 79.68 | 94.50 | 92.32 | 89.57 | 74.67 | 87.77 | 98.80 | 98.60 | 96.20 | 97.87 | 96.60 | 90.60 | 93.60 | 88.50 |
| BERT-10 | 91.74 | 90.60 | 87.61 | 55.96 | 81.48 | 94.22 | 92.17 | 88.86 | 78.62 | 88.47 | 99.40 | 98.60 | 94.60 | 97.53 | 97.20 | 90.40 | 93.80 | 89.23 |
| DistilBERT | 91.17 | 85.78 | 84.29 | 49.08 | 77.58 | 94.21 | 91.71 | 89.29 | 83.01 | 89.56 | 99.40 | 98.00 | 87.80 | 95.07 | 97.00 | 88.80 | 92.90 | 87.66 |
| DistilBERT-10 | 89.79 | 89.45 | 83.03 | 51.72 | 78.50 | 94.37 | 91.25 | 88.67 | 83.70 | 89.50 | 99.20 | 98.80 | 87.80 | 95.27 | 97.00 | 90.60 | 93.80 | 88.11 |
| ALBERT | 89.11 | 84.06 | 80.73 | 50.00 | 75.98 | 93.67 | 91.17 | 88.53 | 73.83 | 86.80 | 98.60 | 96.60 | 96.40 | 97.20 | 94.20 | 88.80 | 91.50 | 86.59 |
| ALBERT-10 | 91.28 | 88.88 | 87.04 | 51.49 | 79.67 | 94.04 | 91.53 | 88.07 | 77.22 | 87.72 | 99.20 | 98.60 | 94.40 | 97.40 | 96.00 | 90.20 | 93.10 | 88.30 |
| RoBERTa | 93.81 | 92.20 | 88.42 | 49.08 | 80.88 | 95.13 | 92.71 | 90.33 | 25.00 | 75.80 | 99.60 | 99.40 | 87.80 | 95.60 | 97.40 | 85.40 | 91.40 | 84.33 |
| RoBERTa-10 | 94.04 | 92.89 | 88.88 | 49.08 | 81.22 | 94.45 | 92.38 | 89.99 | 87.59 | 91.10 | 99.60 | 99.00 | 87.80 | 95.47 | 97.60 | 93.80 | 95.70 | 89.78 |
| Proposed Approaches | | | | | | | | | | | | | | | | | | |
| Approaches Without Pretrained Transformers | | | | | | | | | | | | | | | | | | |
| GloVe.6B.300d | 64.68 | 63.19 | 59.52 | 55.16 | 60.64 | 88.95 | 85.50 | 83.57 | 78.03 | 84.01 | 96.40 | 94.40 | 89.60 | 93.47 | 51.80 | 41.60 | 46.70 | 73.26 |
| Jaccard+Word1Gram | 60.67 | 61.24 | 53.44 | 52.41 | 56.94 | 84.13 | 74.04 | 60.96 | 42.41 | 65.39 | 98.00 | 94.60 | 89.00 | 93.87 | 81.20 | 54.60 | 67.90 | 69.75 |
| Jaccard+Char3Gram | 66.17 | 62.16 | 58.14 | 51.26 | 59.43 | 88.33 | 82.03 | 70.43 | 54.09 | 73.72 | 98.80 | 96.00 | 93.20 | 96.00 | 79.60 | 61.00 | 70.30 | 73.94 |
| Cosine+TFIDF | 70.07 | 66.74 | 59.98 | 53.90 | 62.67 | 88.26 | 83.70 | 74.79 | 57.03 | 75.95 | 98.40 | 96.00 | 93.00 | 95.80 | 64.00 | 65.60 | 64.80 | 74.73 |
| Bi-Encoder-Based Approaches | | | | | | | | | | | | | | | | | | |
| Model NN | 74.43 | 66.06 | 65.14 | 64.11 | 67.44 | 90.00 | 87.49 | 83.24 | 77.01 | 84.44 | 98.60 | 95.00 | 93.60 | 95.73 | 64.80 | 52.40 | 58.60 | 77.84 |
| Model MV | 73.74 | 72.48 | 71.79 | 66.06 | 71.02 | 92.11 | 90.26 | 87.05 | 82.75 | 88.04 | 98.60 | 95.40 | 95.60 | 96.53 | 76.60 | 64.80 | 70.70 | 82.10 |
| Model WV | 74.66 | 72.71 | 71.90 | 65.94 | 71.30 | 92.13 | 90.21 | 87.18 | 83.08 | 88.15 | 98.60 | 95.40 | 95.40 | 96.47 | 76.40 | 64.80 | 70.60 | 82.19 |
| Model FW | 74.89 | 72.59 | 70.87 | 67.09 | 71.36 | 92.49 | 90.45 | 87.62 | 84.14 | 88.68 | 99.00 | 95.60 | 96.00 | 96.87 | 78.60 | 63.40 | 71.00 | 82.52 |
| Bi-Encoder + Adaptive Selection | | | | | | | | | | | | | | | | | | |
| Model FW-AS | 83.49 | 83.94 | 84.40 | 69.84 | 80.42 | 92.49 | 91.04 | 88.46 | 84.74 | 89.18 | 99.40 | 99.20 | 97.60 | 98.73 | 89.60 | 80.60 | 85.10 | 88.06 |
| Bi-Encoder + Adaptive Selection + Ensemble | | | | | | | | | | | | | | | | | | |
| Model FW-AS-2BI | 86.01 | 86.47 | 86.24 | 79.93 | 84.66 | 92.88 | 91.04 | 88.61 | 85.58 | 89.53 | 99.60 | 99.40 | 97.80 | 98.93 | 89.00 | 80.80 | 84.90 | 89.49 |
| Model FW-AS-3BI | 85.09 | 85.78 | 85.67 | 79.70 | 84.06 | 92.88 | 91.18 | 88.66 | 85.96 | 89.67 | 99.60 | 99.20 | 97.80 | 98.87 | 89.40 | 80.40 | 84.90 | 89.33 |
| Bi-Encoder + Adaptive Selection + Ensemble + Existing Approach | | | | | | | | | | | | | | | | | | |
| Model FW-AS-2BI-BT | 94.04 | 93.35 | 88.76 | 79.93 | 89.02 | 94.75 | 92.05 | 89.54 | 87.00 | 90.84 | 99.40 | 99.20 | 87.80 | 95.47 | 97.20 | 92.40 | 94.80 | 91.96 |
Table 5. Accuracy comparison across different datasets for various types of bi-encoders (accuracy, %).

| Bi-Encoder ID | SST 100% | SST 10% | SST 1% | SST 0.1% | SST Avg. | AGNEWS 100% | AGNEWS 10% | AGNEWS 1% | AGNEWS 0.1% | AGNEWS Avg. | SMSSpam 100% | SMSSpam 10% | SMSSpam 1% | SMSSpam Avg. | TREC 100% | TREC 10% | TREC Avg. | Total Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Bi-encoder 1 | 81.31 | 76.03 | 75.46 | 76.95 | 77.44 | 90.32 | 87.57 | 84.37 | 79.04 | 85.32 | 98.20 | 96.00 | 94.60 | 96.27 | 70.00 | 56.60 | 63.30 | 82.03 |
| Bi-encoder 2 | 76.15 | 73.28 | 63.53 | 67.89 | 70.21 | 90.20 | 87.75 | 83.99 | 77.45 | 84.85 | 98.80 | 98.40 | 95.80 | 97.67 | 65.80 | 58.00 | 61.90 | 79.77 |
| Bi-encoder 3 | 72.94 | 69.04 | 64.11 | 60.09 | 66.54 | 90.05 | 87.00 | 83.92 | 78.11 | 84.77 | 99.20 | 96.40 | 93.40 | 96.33 | 73.80 | 68.40 | 71.10 | 79.73 |
| Bi-encoder 4 | 74.43 | 66.06 | 65.14 | 64.11 | 67.43 | 90.00 | 87.49 | 83.24 | 77.01 | 84.43 | 98.60 | 95.00 | 93.60 | 95.73 | 64.80 | 52.40 | 58.60 | 77.84 |
| Bi-encoder 5 | 68.23 | 64.33 | 57.45 | 58.03 | 62.01 | 90.92 | 87.61 | 83.55 | 78.01 | 85.02 | 98.40 | 95.80 | 94.60 | 96.27 | 67.00 | 57.20 | 62.10 | 77.01 |
| Bi-encoder 6 | 68.23 | 63.76 | 61.58 | 62.84 | 64.11 | 90.32 | 87.42 | 83.08 | 76.25 | 84.27 | 97.60 | 96.20 | 93.40 | 95.73 | 62.60 | 53.20 | 57.90 | 76.65 |
| Bi-encoder 7 | 65.14 | 62.96 | 57.91 | 56.42 | 60.61 | 90.25 | 87.34 | 82.42 | 77.83 | 84.46 | 98.80 | 95.00 | 95.00 | 96.27 | 62.60 | 55.40 | 59.00 | 75.93 |
| Bi-encoder 8 | 79.47 | 78.90 | 78.21 | 75.34 | 77.98 | 89.99 | 86.37 | 82.41 | 75.82 | 83.64 | 99.20 | 97.40 | 95.00 | 97.20 | 70.00 | 60.80 | 65.40 | 82.22 |
| Bi-encoder 9 | 72.25 | 69.84 | 66.74 | 65.37 | 68.55 | 88.54 | 84.67 | 79.88 | 68.41 | 80.38 | 98.80 | 95.60 | 95.00 | 96.47 | 63.20 | 54.00 | 58.60 | 77.10 |
| Bi-encoder 10 | 75.46 | 73.97 | 71.79 | 70.18 | 72.85 | 89.70 | 85.42 | 80.34 | 72.58 | 82.01 | 97.40 | 97.00 | 96.00 | 96.80 | 66.60 | 57.20 | 61.90 | 79.51 |
| Bi-encoder 11 | 71.33 | 68.58 | 65.48 | 65.02 | 67.60 | 89.28 | 85.29 | 80.00 | 69.57 | 81.03 | 99.20 | 96.60 | 93.60 | 96.47 | 57.60 | 47.00 | 52.30 | 76.04 |
| Bi-encoder 12 | 66.86 | 64.91 | 60.67 | 59.75 | 63.04 | 90.14 | 86.53 | 83.01 | 75.01 | 83.67 | 99.40 | 98.80 | 97.40 | 98.53 | 79.40 | 72.60 | 76.00 | 79.58 |
| Bi-encoder 13 | 67.89 | 64.56 | 60.21 | 58.14 | 62.70 | 90.16 | 86.80 | 82.74 | 75.09 | 83.70 | 99.60 | 98.80 | 97.60 | 98.67 | 76.80 | 71.60 | 74.20 | 79.23 |
