Peer-Review Record

Two-Branch Feature Interaction Fusion Method Based on Generative Adversarial Network

Electronics 2023, 12(16), 3442; https://doi.org/10.3390/electronics12163442
by Rong Chang 1, Junpeng Dang 1, Nanchuan Zhang 1, Shan Zhao 2,*, Shijin Hu 2,*, Lin Xing 2, Haicheng Bai 3, Chengjiang Zhou 2 and Yang Yang 2
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4:
Reviewer 5:
Submission received: 31 May 2023 / Revised: 25 June 2023 / Accepted: 29 June 2023 / Published: 15 August 2023

Round 1

Reviewer 1 Report

The paper addresses an important issue and is well written, but I consider it too long. At the same time, the authors should separate the discussion from the conclusions; that is, there should be an improved discussion section separate from the conclusions.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors propose a two-branch feature interaction method based on a generative adversarial network for visible and infrared image fusion. They also provide comparison experiments against state-of-the-art methods, highlighting their method's ability to preserve useful information when visible images are disturbed by noise.

A few suggestions for improvement:

- A brief paragraph introducing the evaluation metrics in Section 4.1.2 should be added.

- It would be good to discuss the results in comparison with previous papers; I also suggest discussing the limitations and possible further improvements of your method.

Good luck

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

1. In most of the references the year of publication is not declared; why is this?
2. Go a little deeper into the subject in the introduction section.
3. Describe in more depth the results obtained from the investigation and make a more detailed comparison with results that have already been published in recent investigations.

Minor editing of English language required

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 4 Report

19/06/2023

Dear authors,

In the manuscript Two-branch Feature Interaction Fusion Method based on Generative Adversarial Network you propose a two-branch feature interaction method based on a generative adversarial network for visible and infrared image fusion. You also show extensive comparison experiments with state-of-the-art methods, which demonstrate the advantage of your method in enhancing useful information when visible images are disturbed by noise.

General comments

The Abstract has a lot of general and bombastic phrases but no specific information about the results of your method. In the Abstract, you should state the motivation, give a short statement of what your method brings, and indicate the level of improvement over previous methods.

The study is interesting and has potential in image processing and interpretation. However, such manuscripts should be written in the third person. You have used the word 'we' 32 times and the word 'our' 62 times in the text; this greatly irritates the reader, so you need to change this throughout the text. The paper is not written according to the instructions for authors and does not follow the proposed form, which makes it more difficult to follow the course of events in the manuscript.

Given that fact, the Introduction looks too short and lacks references and information about the area in which you are introducing a new method, while the review of current methods is too long.

There are too many 'random' procedures (such as image selection) and 'empirical' settings in your methodology that are not explained in any way. There are too few different scenes in the pictures to justify such a bombastic conclusion in the first sentence of the Abstract.

General note: rearrange the entire manuscript according to the given template to make it easier to read. Some parts of the manuscript read more like a chapter in a book. Because the manuscript is written in a confused manner and is not structured according to the recommendations and form for authors, the Discussion and Conclusion are unacceptable. All individual results should be interpreted in the Discussion, and all results and benefits of the proposed method should be highlighted in the Conclusion.

Specific comments (are in the manuscript)

- Line 1 – A very bombastic start. However, is that completely true?! And you don't mention the influence of noise in the title!

- Line 6 – Such a manuscript should be written in the third person, so please change this throughout the text.

- Lines 17-22 – These claims should be supported by references, regardless of whether they are known to experts in the field you are dealing with.

- Lines 23-25 – You should move this sentence to the end of the Introduction, after you present the state of the area.

- Lines 149-154 – This text should be moved to the end of the Introduction.

- Lines 260-261 – How did you come up with these numbers? You should explain, not just state 'empirically'.

- Lines 380-381 – This has been repeated many times and has no place here.

Best regards

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 5 Report

The paper is devoted to a two-branch feature interaction fusion network for visible and infrared images based on a GAN. The method proposed by the authors preserves edge features and texture information from both images, filters out the influence of noise, enhances the guiding ability of feature extraction, and improves texture detail and contrast in noisy visible images. The paper is well written, and a comparison against different methods was carried out on a large data set. The paper shows promising results and provides a useful contribution to image fusion research. There are a few small comments for the authors:

1) Figure 1 is a general view of Figures 2 and 3; what is the justification for using the same picture several times?
2) In Subsections 2.2 and 2.3 it would be nice to depict the structures of the methods (CNN, GAN).
3) The conclusion section must be expanded; add numerical estimates of the results obtained.

 Line 172: infared --> infrared

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 4 Report

28/06/2023

Dear authors,

In the manuscript Two-branch Feature Interaction Fusion Method based on Generative Adversarial Network you propose a two-branch feature interaction method based on a generative adversarial network for visible and infrared image fusion. You also show extensive comparison experiments with state-of-the-art methods, which demonstrate the advantage of your method in enhancing useful information when visible images are disturbed by noise.

General comments

You answered all my comments and questions. I am mostly satisfied with them and have no major complaints. Here you need to highlight your conclusions based on all the results.

Best regards
