Auto-Probabilistic Mining Method for Siamese Neural Network Training
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The authors introduced the Auto-Probabilistic Mining Method for Siamese Neural Network Training. The novelty of the work is marginal and needs some improvement; other issues are as follows:
1. The abstract section should be reorganized. It must start with the research problem, gaps, motivation(s), used techniques, novelty, and achievements, rather than suddenly presenting the proposed model.
2. None of the Keywords has been used in the abstract section.
3. On the first page of the introduction section, the authors should cite some papers for the concepts that are used.
4. In the Abstract section, the definition of OCR, “OCR is a task where neural networks typically have as many neurons in the last layer as there are distinguishable classes,” is ambiguous and inaccurate. Authors must have a precise and scientific definition of the concepts used.
5. In the last paragraph of the introduction section, authors should provide more descriptions of their proposed method.
6. The title of Figure 2 should have more details. Also, the authors should cite the research from which this figure was taken.
7. The parameters and variables used in Equations 1 and 2 should be defined.
8. Most of the papers used in the related work section are old. Authors should add and review more recent papers.
9. The research contributions and organization should be added at the end of the introduction section. Also, the novelty of the proposed method should be defined in more detail.
10. In section 3.2, f(x_i) and (f(x_i), C_i) are defined but not used in Equations 3 and 4. Also, “P” is not defined.
11. The reason behind using Equation 5 as the metric should be defined. Also, at the beginning of section 3.4, a word such as “function” or “metric” should be added after the term “loss”. Besides, it seems that the authors should use “loss metric” instead of “metric loss”.
12. In section 4.1.1, the only reason given for using the PHD08 dataset is its use in previous works. This is not enough. The authors should add more reasons for choosing this dataset.
13. In section 4.2 the authors used 60 percent of the data for training and 20 percent each for validation and testing. How were these values obtained? Only by chance, or through specific parameter-tuning methods? The same question applies to other parameters such as the number of epochs and learning rates.
14. The name and quality of Figure 6 must be improved.
15. Authors should use more performance metrics to validate their model.
16. Authors should add a discussion section to analyze the reasons behind the superiority of their proposed method over some existing methods.
17. English writing should be improved.
18. Adding some future work is suggested.
Comments on the Quality of English Language
It should be improved.
Author Response
Comments 1: The abstract section should be reorganized. It must start with the research problem, gaps, motivation(s), used techniques, novelty, and achievements, rather than suddenly presenting the proposed model.
Response 1: Thank you for pointing this out. I agree with this comment. Therefore, I have reorganized the abstract to start with the research problem, existing gaps, motivations, and techniques used, followed by the novelty of the proposed model and the key achievements.
Comments 2: None of the Keywords has been used in the abstract section.
Response 2: Thank you for pointing this out. I agree with this comment. Therefore, I have revised the abstract to incorporate the keywords, ensuring they are properly referenced in the context of the research.
Comments 3: On the first page of the introduction section, the authors should cite some papers for the concepts that are used.
Response 3: I agree with this comment. To address this, I have added citations to relevant papers that support the concepts discussed in the introduction section.
Comments 4: In the Abstract section, the definition of OCR, “OCR is a task where neural networks typically have as many neurons in the last layer as there are distinguishable classes,” is ambiguous and inaccurate. Authors must have a precise and scientific definition of the concepts used.
Response 4: Thank you for your valuable feedback. I agree with this comment and have revised the definition of OCR to provide a more precise and scientific explanation in the abstract.
Comments 5: In the last paragraph of the introduction section, authors should provide more descriptions of their proposed method.
Response 5: Thank you for your suggestion. I agree with this comment and have added more detailed descriptions of the proposed method in the last paragraph of the introduction section. The updated version now provides a clearer explanation of how the proposed methods work.
Comments 6: The title of Figure 2 should have more details. Also, the authors should cite the research from which this figure was taken.
Response 6: Thank you for pointing this out. I agree with your comment and have revised the title of Figure 2 to provide more detailed information about its content. Since this figure is my own creation, I have also updated the figure description accordingly. The updated title can be found under Figure 2 on page 3 of the revised manuscript.
Comments 7: The parameters and variables used in Equations 1 and 2 should be defined.
Response 7: Thank you for pointing this out. I agree with your comment. Therefore, I have defined the parameters and variables used in Equations 1 and 2 in the revised manuscript. Specifically, the definition of each parameter has been added to the corresponding sections immediately after the equations. (page 3)
Comments 8: Most of the papers used in the related work section are old. Authors should add and review more recent papers.
Response 8: Thank you for your valuable feedback. I agree with your comment. Therefore, I have reviewed and included more recent papers in the related work section to ensure the coverage of the latest advancements in the field. One of them is “Mathematical Justification of Hard Negative Mining via Isometric Approximation Theorem” (2022).
Comments 9: The research contributions and organization should be added at the end of the introduction section. Also, the novelty of the proposed method should be defined in more detail.
Response 9: Thank you for your suggestion. I agree with your comment. Therefore, I have added a section at the end of the introduction outlining the research contributions and the organization of the paper. Additionally, I have provided a more detailed description of the novelty of the proposed method, emphasizing its unique aspects and contributions to the field. These changes can be found in the revised manuscript on page 2.
Comments 10: In section 3.2, f(x_i) and (f(x_i), C_i) are defined but not used in Equations 3 and 4. Also, “P” is not defined.
Response 10: Thank you for pointing this out. I agree with your comment. In the revised manuscript, I have clarified the use of f(xi) and (f(xi),Ci) in Equations 3 and 4. Additionally, I have defined the term "P" to ensure the clarity of the equations. Specifically, P refers to the probability distribution calculated using the distances between the input vector and the class centers.
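For illustration, the following is a minimal Python sketch of one way such a distribution can be formed, assuming a softmax over negative distances to the class centers; the temperature parameter and function names are illustrative and not taken from the manuscript.

import numpy as np

def class_probabilities(embedding, class_centers, temperature=1.0):
    # embedding: (d,) feature vector of one sample; class_centers: (C, d) matrix of centers
    dists = np.linalg.norm(class_centers - embedding, axis=1)  # distance to each class center
    logits = -dists / temperature                              # closer center => larger logit
    logits -= logits.max()                                     # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()                                 # probability distribution P over classes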
Comments 11: The reason behind using Equation 5 as the metric should be defined. Also, at the beginning of section 3.4, a word such as “function” or “metric” should be added after the term “loss”. Besides, it seems that the authors should use “loss metric” instead of “metric loss”.
Response 11: Thank you for pointing this out. I understand that there may have been some confusion regarding the term "metric loss." To clarify, in the revised manuscript, I have provided additional context to make the term "metric loss" more intuitive.
Comments 12: In section 4.1.1, the only reason given for using the PHD08 dataset is its use in previous works. This is not enough. The authors should add more reasons for choosing this dataset.
Response 12: Thank you for your comment. We agree with your observation. The PHD08 dataset was selected not only due to its usage in previous works but also because of its relevance to the task at hand. The dataset is particularly suitable for our research due to the high visual similarity between certain Korean characters, which presents a significant challenge for OCR tasks. This characteristic makes it a valuable resource for evaluating the performance of our proposed method. We have added this explanation in the revised manuscript on page 2, specifically in Figure 1.
Comments 13: In section 4.2 the authors used 60 percent of the data for training and 20 percent each for validation and testing. How were these values obtained? Only by chance, or through specific parameter-tuning methods? The same question applies to other parameters such as the number of epochs and learning rates.
Response 13: Thank you for your comment. The choice of 60% of the data for training and 20% each for validation and testing was taken directly from the referenced work, which adopted this data split. Other parameters were taken from experiments dealing with the PHD08 dataset.
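As a reproducibility aid, below is a minimal sketch of one standard way to obtain such a 60/20/20 partition; the variable names (images, labels) are hypothetical and this is not the exact code used in our experiments.

from sklearn.model_selection import train_test_split

# First carve off 60% for training, then split the remaining 40% evenly into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(
    images, labels, test_size=0.4, stratify=labels, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)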
Comments 14: The name and quality of Figure 6 must be improved.
Response 14: Thank you for your feedback. We agree with your suggestion. We have improved the quality of Figure 6 by enhancing its resolution and clarity.
Comments 15: Authors should use more performance metrics to validate their model.
Response 15: Thank you for your comment. We acknowledge the importance of using diverse performance metrics in model evaluation. However, for the scope of this study, we have chosen accuracy as the primary metric. This decision is justified by the nature of the task, where accuracy is widely accepted and sufficient for evaluating performance. Additionally, accuracy aligns with the metrics used in the studies we compare against, ensuring consistency in benchmarking.
Comments 16: Authors should add a discussion section to analyze the reasons behind the superiority of their proposed method over some existing methods.
Response 16: Thank you for this valuable suggestion. While our manuscript originally included a discussion of the proposed method's superiority, we agree that further elaboration strengthens the analysis. We have expanded the discussion section to provide a more detailed analysis of the factors contributing to the method’s performance. Specifically, we now highlight the role of the Cluster-Aware Metric Loss and the auto-probabilistic mining strategy in enhancing class separation and their comparative advantages over existing approaches.
Comments 17: English writing should be improved.
Response 17: Thank you for the suggestion. We have carefully reviewed and refined the language throughout the manuscript to improve clarity, precision, and readability. Specific areas with potential for improvement were addressed, and the revised manuscript now reflects these changes. We hope the updated version meets the expected standard of English writing.
Comments 18: Adding some future work is suggested.
Response 18: Thank you for the suggestion. We have added a dedicated section on future work to outline potential directions for extending our research. This includes exploring the application of the proposed methods to one-shot learning tasks and reidentification problems, as well as further investigating their performance on larger and more diverse datasets. These additions can be found at the end of the conclusion section (Page 11, Paragraph 2).
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The document "Auto-probabilistic Mining Method for Siamese Neural Networks Training" presents an interesting proposal. However, the document must be rewritten to better convey the contribution and explain the proposal's details.
- The Introduction is too long and contains concepts and diagrams that must appear in another section. In addition, it is not clear to me what your research contributes and how it differs from previous work.
- Lines 191-192. You are mixing two different ideas: the characteristics of images and the execution parameters (epochs and iterations). Please first focus only on the dataset's description, and then, you can explain the execution parameters.
- Subsection 4.3 Augmentation: It could be helpful to include the number of images in your datasets after applying data augmentation.
- Section 3. Suggested method. I recommend moving this section after the description of the datasets to first understand the problem.
- Lines 144-145: Why do you talk about positive or negative class? Your problem has many classes.
- Equations 3 and 4: I need help understanding how you use the images in these equations.
- Section 3. It appears to be a good idea, but I recommend explaining it better.
- How many images do you use for training, validating, and testing?
- Were all your datasets balanced? It could be helpful to have the number of samples per class or a histogram to analyze if you have a uniform distribution of samples per class.
Author Response
Comments 1: The Introduction is too long and contains concepts and diagrams that must appear in another section. In addition, it is not clear to me what your research contributes and how it differs from previous work.
Response 1: Thank you for the detailed feedback. We agree that the introduction section could benefit from a more focused structure. Therefore, we have shortened the introduction by removing certain figures, equations and transferring them to the relevant related work section. Additionally, we have refined the description of our research contributions and clarified how our work differs from previous studies. These revisions can be found in the introduction section (Page 2) and the related work section (Page 3).
Comments 2: Lines 191-192. You are mixing two different ideas: the characteristics of images and the execution parameters (epochs and iterations). Please first focus only on the dataset's description, and then, you can explain the execution parameters.
Response 2: Thank you for pointing out this issue. We agree that the description mixes two distinct ideas, which could confuse the reader. We have revised this section to separate the dataset description and the explanation of execution parameters. The dataset characteristics are now discussed first, followed by the execution parameters. This adjustment improves clarity and ensures a logical flow. The changes can be found on Page 5 (Datasets section).
Comments 3: Subsection 4.3 Augmentation: It could be helpful to include the number of images in your datasets after applying data augmentation.
Response 3: Thank you for the suggestion. We understand the importance of providing clarity regarding the dataset size after augmentation. However, in our case, data augmentation was applied on-the-fly during the training process rather than generating a fixed augmented dataset beforehand. This approach dynamically increases the diversity of training samples without requiring additional storage. We have clarified this in Subsection 3.3 on Page 6, Lines 189-190.
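To make the on-the-fly setup concrete, a minimal PyTorch-style sketch is given below; the specific transforms are illustrative and not the exact augmentation pipeline used in the paper.

import torchvision.transforms as T
from torch.utils.data import DataLoader

train_transform = T.Compose([
    T.RandomAffine(degrees=5, translate=(0.05, 0.05)),  # random geometric jitter, re-sampled on every draw
    T.ToTensor(),
])
# The transform is applied inside the dataset's __getitem__, so every epoch sees a freshly
# augmented view of each image and no augmented copies need to be stored, e.g.:
# train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)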
Comments 4: Section 3. Suggested method. I recommend moving this section after the description of the datasets to first understand the problem.
Response 4: Thank you for the suggestion. We agree with this comment and have accordingly swapped Section 3 (Suggested Method) and Section 4 (Datasets). This change ensures that the problem context is introduced before the proposed method. The updated structure can be found in the revised manuscript.
Comments 5: Lines 144-145: Why do you talk about positive or negative class? Your problem has many classes.
Response 5: Thank you for pointing this out. We agree with this comment and have made the necessary adjustments. We removed the terms 'positive class' and 'negative class' and replaced them with 'positive and negative pairs' and 'triplets', which describe the sampled data more accurately and better reflect our problem.
Comments 6: Equations 3 and 4: I need help understanding how you use the images in these equations.
Response 6: Thank you for your comment. We appreciate your feedback and understand the need for clarification. In Equations 3 and 4, the images, represented by x_i, are processed through the model f to compute feature vectors f(x_i), which are then used to calculate the distances between samples and their respective class centroids. These distances are subsequently used to compute the probabilities or loss values as described in the equations. We have revised the explanation in the manuscript to make the role of images in these equations clearer.
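As an additional illustration of this flow, here is a generic PyTorch sketch (not the exact Equations 3 and 4 from the paper) showing how each image x_i is mapped by the model f to an embedding, how batch-wise class centroids are formed, and how the distances to those centroids are obtained; the function name and batch-wise centroid assumption are our own.

import torch

def centroid_distances(model, images, labels):
    feats = model(images)                                    # f(x_i) for every image, shape (N, d)
    classes = labels.unique()
    centers = torch.stack([feats[labels == c].mean(dim=0)    # centroid of each class present in the batch
                           for c in classes])                # shape (C, d)
    dists = torch.cdist(feats, centers)                      # (N, C) distances fed into the probabilities/loss
    return dists, classes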
Comments 7: Section 3. It appears to be a good idea, but I recommend to explain better.
Response 7: Thank you for your suggestion. We agree that further clarification is beneficial. In Section 4 (previously 3), we have expanded the explanation of the proposed method to provide a more detailed and clear description of its working principles and steps. We have elaborated on the key components and their relationships, ensuring a better understanding of the method. Please refer to the revised Section 4 (previously 3) for the updated explanation.
Comments 8: How many images do you use for training, validating, and testing?
Response 8: Thank you for your question. In the experiments, we used a standard dataset split for training, validation, and testing. Specifically, for the Omniglot dataset, 60% of the images were used for training, while 20% were reserved for validation and 20% for testing. This division was adopted from previous works cited in the manuscript.
Comments 9: Were all your datasets balanced? It could be helpful to have the number of samples per class or a histogram to analyze if you have a uniform distribution of samples per class.
Response 9: Thank you for your suggestion. In our case, the PHD08 dataset is approximately balanced, and the Omniglot dataset is fully balanced. However, it is important to note that the balance of the dataset does not significantly impact the results, as pairs and triplets are generated during the training process based on the specific mining method used, which influences the overall balance. This detail has now been clarified in the revised manuscript.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The paper introduces a novel "Auto-probabilistic Mining Method for Siamese Neural Networks Training" and highlights its application to Siamese neural networks. While the paper is generally well-written and the proposed idea is promising, several key issues must be addressed to enhance its quality and ensure potential publication:
- The abstract lacks clarity in highlighting the challenges being addressed and does not clearly define major problems in existing work. Additionally, the statement “highlighting its impact on overall performance” is vague. Please specify the types of impact, such as a percentage improvement in accuracy or a reduction in computation cost, and include quantitative results to substantiate the proposed methodology's effectiveness.
- Define the research gap explicitly and describe how the proposed method addresses limitations in existing OCR systems and metric learning approaches.
- Revise the related work section to include more recent studies, ideally not older than 2020, to ensure the relevance of the literature review.
- Include a detailed discussion of the system configuration and implementation specifics of the proposed model for better reproducibility.
- Expand the experimental section to include additional parameters for deeper analysis of the proposed method. Compare its results comprehensively with the latest state-of-the-art methods to validate its performance.
- Better to include an algorithm of the proposed technique and define the problem in the methodology section with a mathematical formulation. In the current version, I didn't find any mathematical formulation of the proposed method.
- Discuss the challenges encountered during this research and outline directions for future work to strengthen the conclusion.
- The paper contains numerous typographical and grammatical errors. For instance, revise phrases like “a mainstream” to “mainstream” for improved readability. Overall, the writing needs thorough proofreading and refinement.
- Revise all figures to ensure high resolution for better readability, as the text in some figures is currently unclear. Provide more descriptive captions for figures and detailed titles for tables to enhance understanding.
- Clearly explain all abbreviations and symbols used in equations. Ensure consistency in the use of keywords and abbreviations throughout the manuscript.
- Ensure that all relevant works are cited. For instance, consider referencing studies focusing on training with limited data, such as: An efficient zero-labeling segmentation approach for pest monitoring on smartphone-based images.
Comments on the Quality of English Language
Must be improved.
Author Response
Comments 1: The abstract lacks clarity in highlighting the challenges being addressed and does not clearly define major problems in existing work. Additionally, the statement “highlighting its impact on overall performance” is vague. Please specify the types of impact, such as a percentage improvement in accuracy or a reduction in computation cost, and include quantitative results to substantiate the proposed methodology's effectiveness.
Response 1: Thank you for your comment. We agree with this feedback and have revised the abstract to better highlight the challenges being addressed, clearly define the major problems in existing work, and specify the impact of our proposed methodology. The statement regarding "highlighting its impact on overall performance" has been updated to include quantitative results, such as a percentage improvement in accuracy. These changes can be found in the updated abstract on page 1.
Comments 2: Define the research gap explicitly and describe how the proposed method addresses limitations in existing OCR systems and metric learning approaches.
Response 2: Thank you for your suggestion. We agree with your comment and have revised the introduction to explicitly define the research gap and describe how our proposed method addresses the limitations in existing OCR systems and metric learning approaches. Specifically, we highlight the challenges posed by the limitations of current loss functions, such as contrastive loss and triplet loss, as well as mining methods. Our method addresses these challenges by introducing a novel Cluster-Aware Metric Loss (CAML) and an auto-probabilistic mining method. Together, these approaches enhance classification accuracy and improve sample-mining efficiency. This clarification can be found in the revised introduction, page 2, lines 38-60, 82-89, 100-115.
Comments 3: Revise the related work section to include more recent studies, ideally not older than 2020, to ensure the relevance of the literature review.
Response 3: Thank you for your valuable feedback. I agree with your comment. Therefore, I have reviewed and included more recent papers in the related work section to ensure the coverage of the latest advancements in the field. One of them is “Mathematical Justification of Hard Negative Mining via Isometric Approximation Theorem” (2022).
Comments 4: Include a detailed discussion of the system configuration and implementation specifics of the proposed model for better reproducibility.
Response 4: Thank you for your suggestion. We agree with this comment and have included a more detailed discussion of the system configuration and implementation specifics to improve the reproducibility of our work.
Comments 5: Expand the experimental section to include additional parameters for deeper analysis of the proposed method. Compare its results comprehensively with the latest state-of-the-art methods to validate its performance.
Response 5:
Comments 6: Better to include an algorithm of the proposed technique and define the problem in the methodology section with mathematical formulation. In the current version, i did'nt find any mathematical formulation of the proposed method.
Response 6: Thank you for pointing this out. We agree with your comment and have made the necessary improvements. We have added an algorithm describing the proposed technique to make the methodology clearer. Additionally, we have included the mathematical formulation of the method to define the problem more rigorously. These additions can be found in Section 4, where the proposed method and its mathematical foundation are now explicitly outlined. We believe this will improve the clarity and rigor of the manuscript.
Comments 7: Discuss the challenges encountered during this research and outline directions for future work to strengthen the conclusion.
Response 7: Thank you for your comment. We agree with your suggestion and have enhanced the conclusion section. We have also outlined potential directions for future work, providing insight into possible improvements and further exploration in the field.
Comments 8: The paper contains numerous typographical and grammatical errors. For instance, revise phrases like “a mainstream” to “mainstream” for improved readability. Overall, the writing needs thorough proofreading and refinement.
Response 8: Thank you for the suggestion. We have carefully reviewed and refined the language throughout the manuscript to improve clarity, precision, and readability. Specific areas with potential for improvement were addressed, and the revised manuscript now reflects these changes. We hope the updated version meets the expected standard of English writing.
Comments 9: Revise all figures to ensure high resolution for better readability, as the text in some figures is currently unclear. Provide more descriptive captions for figures and detailed titles for tables to enhance understanding.
Response 9: Thank you for your feedback. We agree with this comment and have taken the necessary steps to improve the figures and tables. All figures have been revised to ensure high resolution, enhancing their readability and clarity. Additionally, we have updated the captions of the figures and provided more detailed titles for the tables to improve their comprehensibility and context. These changes can be found in the revised manuscript.
Comments 10: Clearly explain all abbreviations and symbols used in equations. Ensure consistency in the use of keywords and abbreviations throughout the manuscript.
Response 10: Thank you for pointing this out. We agree with this comment and have carefully reviewed the manuscript. All abbreviations and symbols used in the equations have been clearly defined and explained. Additionally, we have ensured consistency in the use of keywords and abbreviations throughout the manuscript to avoid confusion. These revisions can be found in the relevant sections of the manuscript where the equations are introduced.
Comments 11: Ensure that all relevant works are cited. For instance, consider referencing studies focusing on training with limited data, such as: An efficient zero-labeling segmentation approach for pest monitoring on smartphone-based images.
Response 11: Thank you for your suggestion. While we appreciate the relevance of the study you mentioned, we have not included it in this paper as it does not directly align with the focus of our current work. However, we acknowledge its potential value and will certainly consider incorporating it in future works to provide a broader context.
Author Response File: Author Response.pdf
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors addressed most of my concerns and improved their work. However, some minor issues must be resolved.
1. Two new papers should be added to the related work section.
2. The authors introduced and modified some equations. Why didn't these modifications affect the experimental results?
Author Response
Comments 1: Two new papers should be added to the related work section.
Response 1: Thank you for the suggestion. We have added two new papers to the related work section.
Comments 2: Authors introduced and modified some equations. Why didn't these modifications affect the experiment results?
Response 2: Thank you for pointing this out. The equations were modified in form but not in meaning; they still represent the same experiments, so the reported results are unchanged.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The new version of the document is more explicit and better organized. I have some comments/suggestions:
- Lines 31-37. I recommend including references that validate the statements.
- Lines 44-49. Is SNN the only methodology that has tackled the problem of limited labeled data? The introduction is expected to compare several approaches that have previously been applied to similar problems or scenarios. I suggest including a brief description of documents 10, 18, 19, and 21 (the ones that appear in Table 2).
- Lines 151-153. It is strange that the beginning of Section 3 mentions the process of resizing images. I recommend starting with a brief introduction to datasets.
- Line 175. What is the meaning of AMP? It is recommended that the meaning of abbreviations be indicated the first time they are used.
- Lines 178-187. How many images does the Omniglot dataset contain?
- Line 227. How is the feature representation of the input image f(x_ij) calculated?
- Lines 264-265. What do a, p, n represent?
- Figure 6. Please use a bigger font size; the image must be clear at 100% zoom.
- Did you calculate the accuracy of the training, validation, and testing dataset? What were the values?
- Could you include the graph of the loss or accuracy values of training and validation relative to the epochs?
Author Response
Comments 1: Lines 31-37. I recommend including references that validate the statements.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have added appropriate references to support the statements in Lines 31-37. The updated references can be found on page 1.
Comments 2: Lines 44-49. Is SNN the only methodology that has tackled the problem of limited labeled data? The introduction is expected to compare several approaches that have previously been applied to similar problems or scenarios. I suggest including a brief description of documents 10, 18, 19, and 21 (the ones that appear in Table 2).
Response 2: Thank you for your comment. The referenced works (documents 10, 18, 19, and 21) indeed utilize SNN-based approaches, which are discussed in more detail in the Related Work section. Given this, we believe the current introduction sufficiently reflects the context of our study. However, if further clarification is required, we would be happy to make adjustments.
Comments 3: Lines 151-153. It is strange that the beginning of Section 3 mentions the process of resizing images. I recommend starting with a brief introduction to datasets.
Response 3: Thank you for your suggestion. We agree with this comment. To improve the logical flow of the manuscript, we have moved the discussion of image resizing to a more appropriate section and ensured that Section 3 begins with a brief introduction to the datasets. This change can be found on page 7 (Experimental Setup section).
Comments 4: Line 175. What is the meaning of AMP? It is recommended that the mining of abbreviations be indicated the first time they are used.
Response 4: Thank you for pointing this out. We agree with this comment. The abbreviation "AMP" has now been explicitly defined when first introduced in the manuscript to ensure clarity.
Comments 5: Lines 178-187. How many images does the Omniglot dataset contain?
Response 5: Thank you for your comment. The total number of images in the Omniglot dataset has been explicitly stated in the revised manuscript to improve clarity. This information can be found on page 6.
Comments 6: Line 227. How the feature representation of the input image f(x_ij) is calculated?
Response 6: Thank you for your question. The feature representation f(x_ij) is computed using the neural network model described in the Experiments section. We have clarified this in the revised manuscript by explicitly mentioning the architecture and processing steps involved.
Comments 7: Line 264-265. What do a,p,n represent?
Response 7: a, p, and n represent the anchor, positive, and negative samples, respectively. We have replaced them with Anc, Pos, and Neg in the manuscript.
Comments 8: Figure 6. Please use a bigger font size, the image must be clear at 100% zoom.
Response 8: We agree. The image in Figure 6 has now been improved.
Comments 9: Did you calculate the accuracy of the training, validation, and testing dataset? What were the values?
Response 9: Thank you for the question. The final accuracy values are presented in the Results section, for example in Table 2.
Comments 10: Could you include the graph of the loss or accuracy values of training and validation relative to the epochs?
Response 10: Thank you for your suggestion. In this work, we focus on the final performance evaluation of the proposed method and its comparison with existing approaches. While graphs of loss and accuracy trends could provide additional insights, we will consider including them in future studies.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
Great effort! The revised manuscript is significantly improved and I have no further comments.
Author Response
Thank you for the revision and feedback!