Article
Peer-Review Record

Factorizable Joint Shift in Multinomial Classification

by Dirk Tasche
Mach. Learn. Knowl. Extr. 2022, 4(3), 779-802; https://doi.org/10.3390/make4030038
Submission received: 6 August 2022 / Revised: 6 September 2022 / Accepted: 7 September 2022 / Published: 10 September 2022
(This article belongs to the Section Learning)

Round 1

Reviewer 1 Report

The authors have presented a well-defined paper on Factorizable Joint Shift in Multinomial Classification. However, I noticed that a few words are repeated multiple times consecutively. I would advise the authors to check this issue and to perform spelling and grammar checks.

Author Response

Dear reviewer,

Thank you for your comment. 

I have corrected a number of typos of the type you mention.

Please see the attached track-changed version for the amendments.

Best regards,
Dirk Tasche

Author Response File: Author Response.pdf

Reviewer 2 Report

For multinomial classification, this article explores the relation between factorizable joint shift and other types of dataset shift, specifically covariate shift and prior probability shift. It concludes that “factorizable joint shift is not much more general than prior probability shift or covariate shift”.
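For orientation, here is a minimal sketch of the definitions at issue, in notation introduced purely for illustration (it is not taken from the paper under review): write $p(x, y)$ for the source distribution and $q(x, y)$ for the target distribution of features $x$ and labels $y$. Factorizable joint shift assumes that the importance weight splits into a feature component and a label component,

$$\frac{q(x, y)}{p(x, y)} = g(x)\, h(y).$$

Covariate shift is then the special case with constant $h$, equivalently $q(y \mid x) = p(y \mid x)$; prior probability shift is the special case with constant $g$, equivalently $q(x \mid y) = p(x \mid y)$.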


Throughout the article, the author carries out robust theoretical research on several aspects of dataset shift, providing several theorems that explore the relations and characteristics of factorizable joint shift, covariate shift, and prior probability shift.


It would be helpful if the author provided some guidance, or noted limitations, for the use of these results in practical situations.

Author Response

Dear reviewer,

Thank you for your comment. 

Admittedly, my paper is more a warning about the deficiencies of the method proposed by He et al. than a presentation of something better. The approaches I discuss in Section 4.1 explicitly require making additional assumptions, in contrast to the Joint Importance Aligning of He et al., which forces the solution away from covariate shift by a somewhat arbitrary regularization.
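As a simple illustration of why additional assumptions or a regularization are needed at all (reusing the sketch notation $g$, $h$ from above), note that the factorization of the importance weight is not unique: for any constant $c > 0$,

$$g(x)\, h(y) = \bigl(c\, g(x)\bigr) \cdot \frac{h(y)}{c},$$

so the two factors are identified at best up to such rescalings, and nothing in the factorization alone rules out a covariate-shift solution (constant $h$) where one exists.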

Nonetheless, I have inserted in the introduction some wording on why the Joint Importance Aligning proposed by He et al. should be deployed only with caution.

Please see the attached track-changed version for the amendments.

Best regards,
Dirk Tasche

Author Response File: Author Response.pdf

Reviewer 3 Report

General Overview of the paper

- This paper presents a re-examination of ‘factorizable joint shift’ as introduced by He et al. The author proves that factorizable joint shift is not much more general than prior probability shift or covariate shift. The paper is well written, and all propositions are theoretically proven. However, I found the following limitations, which should be addressed before the paper is published:


- In the abstract, in addition to briefly summarizing the author's contribution and the methods used, the main findings have to be pointed out. This is because the abstract is the only part of the paper that readers see when they search through electronic databases.

- Avoid using the pronoun "we" when writing an academic paper; use the passive voice instead.

- At the beginning of the paper, the author wrote "In machine learning jargon". I would rather not use the word "jargon" because it does not read as an academic term. Instead, use, e.g., "technical terms" or "in the machine learning field…".

- Even though the paper is theoretical, it does not provide non-specialist readers with enough information about dataset shift and the existing methods.

- The paper lacks a discussion of related work and background.

- The author does not demonstrate how the proposed model helps improve the performance of ML (show the performance of an ML-based model on a benchmark dataset and compare the proposed approach with He et al.'s approach).

- Replace "arXiv preprint" with the full reference!

Author Response

Dear reviewer,

Thank you for your comments. Below I explain how I have taken them into account for the revision of the paper.

Your comment on the abstract:
I have amended the abstract, giving more detail on the findings of the paper. Please see the attached track-changed version for the amendments.

Your comment on using "we": 
In my opinion, the use of "we" (as a plural of modesty) is a matter of taste. Obviously, I prefer the "we" style. It might be up to the editors to decide which style is most appropriate for the journal.

Your comment on "jargon":
Changed to "terminology".

Your comment on "non-specialist readers": 
In my opinion, it is not the job of a research paper to provide much background information on its topic. Nonetheless, I have changed the way in which the first two references are quoted in the introduction to make clear that background information on dataset shift can be found there. Please see the attached track-changed version for the amendments.

Your comment on related works and background:
As a matter of fact, by far the most closely related work is the paper by He et al. I have inserted two additional paragraphs in the introduction in order to better explain the connection to that paper. Please see the attached track-changed version for the amendments.

Your comment on demonstrating how the proposed model helps in improving the performance of ML:
Admittedly, my paper is more a warning about the deficiencies of the method proposed by He et al. than a presentation of something better. The approaches I discuss in Section 4.1 explicitly require making additional assumptions, in contrast to the Joint Importance Aligning of He et al., which forces the solution away from covariate shift by a somewhat arbitrary regularization. In this respect, the improvement is more on the conceptual than on the practical side, so it would not make much sense to talk about improved performance or to try to demonstrate it numerically.

Your comment on the arXiv references:
I changed the references accordingly.

Best regards,
Dirk Tasche

Author Response File: Author Response.pdf
