Peer-Review Record

An Optimized Approach to Translate Technical Patents from English to Japanese Using Machine Translation Models

Appl. Sci. 2023, 13(12), 7126; https://doi.org/10.3390/app13127126
by Maimoonah Ahmed 1, Abdelkader Ouda 1,*, Mohamed Abusharkh 2, Sandeep Kohli 3 and Khushwant Rai 3
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 29 April 2023 / Revised: 7 June 2023 / Accepted: 9 June 2023 / Published: 14 June 2023
(This article belongs to the Special Issue Applied Intelligence in Natural Language Processing)

Round 1

Reviewer 1 Report

The paper targets the translation of patents from English to Japanese. It seems that the best result was achieved by finding a suitable pretrained model on the Hugging Face site and subsequently fine-tuning it.
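[Editor's note: for readers unfamiliar with the workflow the reviewer summarizes, a minimal sketch of loading a pretrained English-to-Japanese model from the Hugging Face Hub and running inference follows. The checkpoint name is an illustrative assumption, not necessarily the one used in the paper.]

```python
# Minimal sketch: load a pretrained EN->JA model from the Hugging Face
# Hub and translate one sentence. The checkpoint is an assumed example.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-jap"  # hypothetical choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "The semiconductor device comprises a gate electrode."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```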

The paper seems very extended, with a small amount of original knowledge; it reads like lecture notes for students. Effort needs to be devoted to transforming the text into a research paper. For example, Section 4 ("Current Applications") could be dropped, because its connection to the presented research is weak.

The description of the transformer architecture, with its citation of reference 47, would better fit Section 2 as a prospective NMT method.

I think the conclusions must clearly state which step contributed most to the best system: pretraining or fine-tuning.

Some minor points:

Line 187: "UTF8min"?

Line 199: two empty pairs of quotation marks.

Author Response

Author Response File: Author Response.pdf

Reviewer 2 Report

This paper aims to improve English-to-Japanese translation, especially in technical contexts.

1) In the methodology part:

'The aim of this research is to ultimately improve the accuracy of translating technical patents, belonging to different domains, from English to Japanese. Since MT systems such as Google Translate use generic data for translation, translating highly technical and domain-specific sentences often faces many problems. Some of these problems include lexical ambiguity and sentence structure as discussed earlier in Section II. We evaluated different open-source MT models with varying parameters to study and analyze the performances of each model. Analyzing existing models allowed us to avoid reinventing the wheel while also allowing us to stand on the shoulders of past MT research to gain integral insights for the task of translating complex language structures. Once an initial analysis of a model's performance was completed, we implemented a multi-step approach to fine-tune three models to improve accuracy in patent translation. Figure 2 provides a depiction of the activity diagram of our methodology behind fine-tuning a machine translation model.'

The authors discuss the aim of their study in this part; however, some of the terms are not clear. For example, which three models will be improved? What are they?
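[Editor's note: a hedged sketch of the fine-tuning step the quoted methodology describes, i.e., adapting a pretrained seq2seq MT model to in-domain patent sentence pairs. The model name, hyperparameters, and toy dataset are illustrative assumptions, not the paper's actual setup.]

```python
# Sketch: fine-tune a pretrained seq2seq MT model on a (toy) parallel
# corpus of patent sentences. All names and settings are assumptions.
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "Helsinki-NLP/opus-mt-en-jap"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy parallel corpus standing in for the patent training data.
pairs = Dataset.from_dict({
    "en": ["The gate electrode is formed on the substrate."],
    "ja": ["ゲート電極は基板上に形成される。"],
})

def preprocess(batch):
    # Tokenize source sentences and target labels in one call.
    return tokenizer(batch["en"], text_target=batch["ja"], truncation=True)

train_set = pairs.map(preprocess, batched=True, remove_columns=["en", "ja"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="patent-mt",
                                  per_device_train_batch_size=8,
                                  num_train_epochs=3),
    train_dataset=train_set,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```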

2) Why are the contributions and aims of this study not included in the introduction?

3) From the introduction to the methodology, the authors discuss relevant works as a kind of literature review, but there is no highlighting of the importance of the current work, and it is unnecessarily long.

My advice:

Please

* include the contributions as bullet points in the introduction section,

* reduce the unnecessary content; be concise,

* make the figure clearer, showing what the models and experimental settings are,

* provide more information regarding the NLP Model III result; why is it better when, as far as I can see, the parameters are the same?

* when you use different datasets, different performance is expected, so what is the logic behind using a different dataset in each of NLP Models I, II, and III? (See the evaluation sketch after this list.)
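[Editor's note: the reviewer's last point implies that comparing models requires a common held-out test set. A minimal sketch of scoring one model's output with sacreBLEU follows; the hypothesis and reference sentences are placeholders, and the paper's actual metric setup may differ.]

```python
# Sketch: score one model's translations against references on a shared
# test set with sacreBLEU. Sentences here are placeholder examples.
from sacrebleu.metrics import BLEU

refs = [["ゲート電極は基板上に形成される。"]]      # one reference stream
hyps = ["ゲート電極が基板の上に形成されている。"]  # one model's output

# Character tokenization sidesteps Japanese word segmentation; the
# paper may use a different tokenizer (e.g. MeCab).
bleu = BLEU(tokenize="char")
print(bleu.corpus_score(hyps, refs))
```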

Author Response

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The paper has improved after revision. The abstract could say more about the method used and the results achieved, and less about the motivation for the research.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
