Article

Efficient Adaptation: Enhancing Multilingual Models for Low-Resource Language Translation

by Ilhami Sel * and Davut Hanbay
Department of Computer Engineering, Faculty of Engineering, Inonu University, Malatya 44200, Turkey
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(19), 3149; https://doi.org/10.3390/math12193149
Submission received: 7 August 2024 / Revised: 30 September 2024 / Accepted: 3 October 2024 / Published: 8 October 2024

Abstract

This study focuses on the neural machine translation task for the Turkish–English (TR-EN) language pair, which is considered a low-resource pair. We investigated fine-tuning strategies for pre-trained language models, specifically the effectiveness of parameter-efficient adapter methods for fine-tuning multilingual pre-trained language models. We experimented with various combinations of LoRA and bottleneck adapters. The combination of LoRA and bottleneck adapters outperformed the other methods while requiring only 5% of the pre-trained language model's parameters to be fine-tuned, which improves parameter efficiency and reduces computational cost. Compared with full fine-tuning of the multilingual pre-trained language model, it showed only a 3% difference in BLEU score; nearly the same performance was thus achieved at a significantly lower cost. Models using only bottleneck adapters performed worse despite having a higher trainable parameter count, and adding LoRA alone did not yield sufficient performance, whereas the proposed combination improved machine translation. The results are promising, particularly for low-resource language pairs: the proposed method requires less memory and computational load while maintaining translation quality.
Keywords: natural language processing; transformer; large language model; neural machine translation; parameter-efficient fine-tuning; adapter
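To make the two adapter types concrete, the following is a minimal PyTorch sketch of a LoRA-wrapped linear projection and a residual bottleneck adapter, the building blocks the abstract combines. All class names, dimensions, ranks, and reduction factors here are illustrative assumptions and do not reproduce the paper's exact configuration or model.

```python
# Illustrative sketch only: hyperparameters and dimensions are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)           # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scaling = alpha / r
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, added residually after a sublayer."""

    def __init__(self, d_model: int = 512, reduction_factor: int = 16):
        super().__init__()
        hidden = d_model // reduction_factor
        self.down = nn.Linear(d_model, hidden)
        self.up = nn.Linear(hidden, d_model)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))        # residual connection


if __name__ == "__main__":
    d_model = 512
    x = torch.randn(2, 10, d_model)                       # (batch, seq_len, d_model)

    attn_proj = LoRALinear(nn.Linear(d_model, d_model))   # e.g. one attention projection
    adapter = BottleneckAdapter(d_model)
    out = adapter(attn_proj(x))

    params = list(attn_proj.parameters()) + list(adapter.parameters())
    trainable = sum(p.numel() for p in params if p.requires_grad)
    total = sum(p.numel() for p in params)
    print(f"trainable params: {trainable} / {total}")     # only a small fraction trains
```

In this toy setup only the LoRA matrices and the bottleneck projections receive gradients, while the pre-trained weights stay frozen, which is the mechanism behind the low memory and compute footprint described in the abstract.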
