Peer-Review Record

Magnetotelluric Deep Learning Forward Modeling and Its Application in Inversion

Remote Sens. 2023, 15(14), 3667; https://doi.org/10.3390/rs15143667
by Fei Deng, Jian Hu *, Xuben Wang, Siling Yu, Bohao Zhang, Shuai Li and Xue Li
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Submission received: 22 June 2023 / Revised: 10 July 2023 / Accepted: 17 July 2023 / Published: 23 July 2023

Round 1

Reviewer 1 Report

In this paper, a Mix-Transformer was used to hierarchically extract feature information, a multi-scale approach was adopted to restore feature information, and the skip connections between the encoder and decoder were eliminated, yielding a forward modeling network model (MT-MitNet) oriented towards inversion. The results of the manuscript are positive. Some questions are as follows:

1) How were the number of multi-scale convolution operations and the size of the convolution kernels in the Multiscale Self-Adaptive Module chosen? In addition, a clear definition of the formula would help the reader better understand the structure of the Multiscale Self-Adaptive Module (see the illustrative sketch after this list).

2) It should be demonstrated by experiment whether the introduction of CBAM has a positive effect on the results.

3) Can you give a reason for removing the skip connections between the encoder and decoder of the original MitNet?

4) The superiority of this paper's method should be demonstrated visually by an experimental comparison between the improved model proposed in this paper and the original model.

5) Can you please further explain what the multiple points searched for during the two iterations of the inversion in Figure 9 represent?
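As referenced in point 1, the following is a minimal sketch of one plausible form of a multiscale self-adaptive block, assuming parallel convolutions with kernel sizes 1, 3, and 5 fused by learned softmax-normalized weights; the branch count, kernel sizes, and fusion rule are illustrative assumptions rather than the authors' actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleSelfAdaptive(nn.Module):
    """Hypothetical multiscale self-adaptive block: parallel convolutions
    of different kernel sizes, fused by learned softmax weights.
    Branch count and kernel sizes are illustrative assumptions."""

    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        # One learnable scalar per branch; softmax makes the weights sum to 1.
        self.theta = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x):
        w = F.softmax(self.theta, dim=0)
        # y = w1*Conv1x1(x) + w2*Conv3x3(x) + w3*Conv5x5(x) -- the kind of
        # explicit formula the comment asks the authors to state.
        return sum(wi * branch(x) for wi, branch in zip(w, self.branches))
```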

There are some inappropriate language expressions, such as "changes better" in line 307 and "greater precision" in the conclusions.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The paper presents a novel approach and can be acceptable after the following comments are addressed.

My main concerns about the paper are twofold:

First, using the Mix-Transformer to reduce computation time is quite interesting. If I am right, the purpose of the paper is not to enhance the MT inversion results but to improve the computation time when using the MiT for forward modeling compared to the traditional Occam inversion. This is supported by the first test on page 10, where the RMS at the 9th iteration is 1.805 versus 1.88: there is not much difference, the structures are nearly identical, and there is no further improvement in interpretation, which Figure 9 confirms. If that is the case, the authors should focus their work on the computation times of traditional Occam versus MiT, for instance by generating a graph of iterations versus time (see the sketch following these comments), since the data and code are not available for testing the algorithm.


Second, Occam2D is inversion software. When combining Occam with the Mix-Transformer, at first glance it seems obvious that the computation times should increase. Take, for instance, Figure 6, the simplified MT-MitNet Occam inversion process. As I read the paper, the Occam2D algorithm is not rewritten and MT-MitNet is not embedded in the core of the Occam program, so how is it possible to reduce the computation times? This is a challenge, and I think the authors must clearly explain this section. Furthermore, the data are not available for the reviewers to test the MT-MitNet algorithm for efficiency and to evaluate its time consumption compared to traditional Occam. So a clear and strong explanation must be given in this section to convince the reviewers. This does not mean that it is not feasible, but the section needs a clear explanation. Moreover, as explained on page 9 (lines 266-270), Fortran and C++ are both compiled, fast languages. Since LibTorch (the PyTorch C++ library) bridges Python and C++ under the hood, I am curious to compare its speed against Fortran.
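To illustrate the iterations-versus-time graph requested in the first comment, a minimal benchmarking sketch follows; the two step functions are hypothetical stand-ins (here simulated with sleep calls) for one Occam iteration using the traditional forward solver and the MT-MitNet forward pass, respectively:

```python
import time
import matplotlib.pyplot as plt

def run_occam_iteration_fd():
    time.sleep(0.5)   # placeholder: one iteration with the traditional solver

def run_occam_iteration_net():
    time.sleep(0.05)  # placeholder: one iteration with the network forward pass

def time_per_iteration(step_fn, n_iters=9):
    """Wall-clock time of each inversion iteration."""
    times = []
    for _ in range(n_iters):
        t0 = time.perf_counter()
        step_fn()
        times.append(time.perf_counter() - t0)
    return times

iters = range(1, 10)
plt.plot(iters, time_per_iteration(run_occam_iteration_fd), "o-",
         label="Occam + traditional forward")
plt.plot(iters, time_per_iteration(run_occam_iteration_net), "s-",
         label="Occam + MT-MitNet forward")
plt.xlabel("Iteration")
plt.ylabel("Time per iteration (s)")
plt.legend()
plt.savefig("iterations_vs_time.png")
```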


My remaining comments are in the manuscript file.

Comments for author File: Comments.pdf

Minor editing of English language required

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

This work presents the construction of a novel forward network dataset for MT inversion and introduces MT-MitNet, a forward modeling network based on Mix-Transformer, improving accuracy and efficiency in inversion calculations for multiple anomalies. The manuscript offers a valuable contribution to the field and is suitable for publication in Remote Sensing. The introduction provides relevant context, the problem statement is well-formed, and the research objectives are clear. The methodology is comprehensive, and the results highlight the superior performance of MT-MitNet compared to previous algorithms. However, there are a few areas that require clarification and revision. Therefore, after a few revisions, the manuscript is worth publishing in Remote Sensing. Here are my suggestions:


1. Provide more details about the experimental setup: The article mentions the dimensions of the simulated geoelectric model and the number of observation points, but it lacks information on other crucial aspects such as the training-validation-test split, the loss function used, the optimizer, the number of epochs, and any other relevant hyperparameters. These details are essential for reproducibility and for assessing the robustness of the proposed method. The authors should include this information or refer to supplementary materials if applicable.

2. Justify the choice of the Transformer-based architecture: The authors mention the limitations of RNN and CNN for MT inversion and propose the use of a Transformer-based architecture. However, they should provide more in-depth justification for this choice. Specifically, they should explain why a Transformer-based architecture is more suitable for capturing the global information and handling the characteristics of MT data.

3. Elaborate on the architecture and components of MT-MitNet: While the manuscript briefly describes the encoding and decoding modules of MT-MitNet, it lacks sufficient details to understand the model architecture fully. The authors should provide a more elaborate description of each module, including the input/output dimensions, the number of layers, and the configuration of the attention mechanisms and convolutional layers. This will enable readers to grasp the model's structure and functionality more clearly.

4. Explain the rationale behind the Multiscale Self-Adaptive Module: The description of the Multiscale Self-Adaptive Module is quite brief, and it is not entirely clear why this module is necessary and how it contributes to the overall model performance. The authors should provide a more detailed explanation of the rationale behind this module, highlighting its benefits and how it addresses the challenges specific to the MT inversion task.

5. Provide more details on the integration process: Elaborate on the challenges faced when integrating the Occam inversion program (C++) with the forward network model (Python). Explain the specific issues encountered and how they were addressed to enable successful deployment of the forward network model within the inversion program (see the sketch after this list).
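As referenced in point 5, one common pattern for this kind of integration, offered purely as an illustrative sketch and not as the authors' actual solution, is to export the trained network to TorchScript so the C++ inversion program can load it through LibTorch and call it without a Python interpreter; the stand-in module and input layout below are assumptions:

```python
import torch
import torch.nn as nn

class StandInNet(nn.Module):
    """Trivial stand-in for the trained forward network (MT-MitNet)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 2, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = StandInNet().eval()
example = torch.randn(1, 2, 64, 64)          # hypothetical input layout
scripted = torch.jit.trace(model, example)   # freeze the graph for C++ use
scripted.save("mt_mitnet.pt")

# The C++ inversion code can then run the network natively via LibTorch:
#   auto net = torch::jit::load("mt_mitnet.pt");
#   auto out = net.forward({input_tensor}).toTensor();
```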


Please proofread the manuscript: There are several instances of unclear or awkward phrasing in the manuscript. The authors should carefully proofread the document to ensure clarity and coherence in the writing. For instance:

1. "Although forward modeling is time-intensive, the data (apparent resistivity, phase, and resistivity models) generated by this process are produced by the inversion algorithm, providing high-precision forward modeling samples for deep learning-based forward modeling networks." - This sentence is quite long and convoluted. Consider breaking it down into shorter, clearer sentences for better readability and comprehension.

2. "Although forward modeling is time-intensive, the data (apparent resistivity, phase, and resistivity models) generated by this process are produced by the inversion algorithm, providing high-precision forward modeling samples for deep learning-based forward modeling networks." - This sentence is quite long and convoluted. Consider breaking it down into shorter, clearer sentences for better readability and comprehension.

3. "The CBAM is a lightweight attention module that can conduct attention operations in both spatial and channel dimensions, thereby accentuating the significance of crucial feature information." - The phrase "accentuating the significance of crucial feature information" could be rewritten to be more concise and clearer.

4. "This process effectively prevents redundancy in parameters and excessive computations while enabling each convolution to extract features of the appropriate scale efficiently." - The phrase "extract features of the appropriate scale efficiently" could be clarified to provide a better understanding of what it means in the context of the model.


Please note that these are only a few examples, and there may be other instances in the full manuscript that could benefit from improvement. 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
