Article
Peer-Review Record

Hybrid Convolutional Network Combining Multiscale 3D Depthwise Separable Convolution and CBAM Residual Dilated Convolution for Hyperspectral Image Classification

Remote Sens. 2023, 15(19), 4796; https://doi.org/10.3390/rs15194796
by Yicheng Hu 1, Shufang Tian 1,* and Jia Ge 2
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 31 August 2023 / Revised: 29 September 2023 / Accepted: 29 September 2023 / Published: 1 October 2023

Round 1

Reviewer 1 Report

Hybrid Convolutional Network Combining Multiscale 3D Depthwise Separable Convolution and CBAM Residual Dilated Convolution for Hyperspectral Image Classification

 

- The problem is not properly discussed in the abstract; the dataset and results are also not properly discussed.

- Please include a graphical abstract in the introduction section.

- Include some introductory images of hyperspectral data.

- Point 4 in the contribution list (robust classification performance) needs a separate section to justify the claim; please explain how the system is robust. Similarly, the Convolutional Block Attention Module needs more detail, since it is listed as a contribution but is barely described. Also, multiscale convolutional fusion is not discussed in the paper.

- It is still not clear why this work was conducted. What is the problem statement? There is also no related-work section from which to identify the research gap.

- The dataset section should come before the methodology.

- What preprocessing techniques were applied?

- What are the performance evaluation metrics?

- What is the model architecture?

- What are the hyperparameters?

- The results should be justified with appropriate metrics.

- A comparative analysis on similar datasets should be shown in a separate section. How is this work superior to other works?

- Please provide a discussion section.

- What are the future directions and limitations of this study?

- Try to cite more references from 2022 and 2023.

No Comments

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The manuscript is complete, and the authors try to demonstrate the advancement of the algorithm through experiments. However, there are some problems that need to be revised. The comments are as follows:

1. The motivations and remaining challenges are not clear, nor is it clear what kinds of issues or difficulties this task faces. Please give more details and discussion of the key problems solved in this paper and how they differ from existing works. In addition, I propose reducing the innovation points of the article to three.

2. How adaptable is the algorithm to different numbers of training labels, especially small label sets? Please compare with SOTA methods.

3. These examples may be helpful for the authors to revise the manuscript: Multi-scale Receptive Fields: Graph Attention Neural Network; MultiReceptive Field: An Adaptive Path Aggregation Graph Neural Framework; Multi-feature Fusion: Graph Neural Network and CNN Combining; Unsupervised Self-correlated Learning Smoothy Enhanced Locality Preserving Graph Convolution Embedding Clustering; AF2GNN: Graph Convolution with Adaptive Filters and Aggregators.

4. What is the computational complexity?

5. How are the hyperparameters set in the manuscript? Please demonstrate the setting process through experiments.

6. Some future directions should be pointed out in the conclusion.

7. Please provide the code of the paper to demonstrate the feasibility of the proposed method.

8. These examples may be helpful for the authors to revise the manuscript: Semi-Supervised Locality Preserving Dense Graph Neural Network With ARMA Filters and Context-Aware Learning.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

The main question addressed by the research is: "How can hyperspectral image classification be improved, considering challenges such as computational intensity of 3D convolutions and imbalanced data distribution, through the development of an integrated neural network model called MDRDNet that combines Multiscale 3D Depthwise Separable Convolutional Network and a CBAM-augmented Residual Dilated Convolutional Network?"

The topic of the research is both original and relevant in the field of hyperspectral image classification.

The research introduces a novel neural network architecture called MDRDNet that combines several innovative components, including Multiscale 3D Depthwise Separable Convolutional Network and the integration of the Convolutional Block Attention Module (CBAM) with dilated convolutions via residual connections. This combination is unique and not commonly found in existing hyperspectral image classification models. Therefore, in terms of architecture and methodology, the research presents original contributions to the field.

Hyperspectral image classification is a crucial area of research with applications in agriculture, environmental monitoring, remote sensing, and more. 

The research tackles the challenge of computational intensity associated with 3D convolutions, which is a significant concern when working with hyperspectral data due to its high dimensionality. 

This paper introduces an innovative neural network architecture named MDRDNet, which combines Multiscale 3D Depthwise Separable Convolution and CBAM Residual Dilated Convolution. 

The paper addresses the computational intensity of 3D convolutions, a challenge often faced when dealing with hyperspectral data. By introducing depthwise separable convolutions in a 3D setting, the research substantially reduces the computational burden while maintaining the ability to capture spatial-spectral characteristics. 
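For readers unfamiliar with the idea, a 3D depthwise separable convolution factorizes a standard 3D convolution into a per-channel (depthwise) convolution followed by a 1x1x1 pointwise convolution, which is where the parameter and FLOP savings come from. The following is a minimal PyTorch sketch of that factorization; the channel counts, kernel size, and patch shape are arbitrary placeholders, not the layer configuration reported in the paper.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Minimal 3D depthwise separable convolution:
    a per-channel (depthwise) 3D conv followed by a 1x1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_channels makes the convolution depthwise (one filter per input channel)
        self.depthwise = nn.Conv3d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels, bias=False)
        # 1x1x1 pointwise conv mixes information across channels
        self.pointwise = nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example: a hyperspectral patch treated as (batch, channels, bands, height, width)
x = torch.randn(2, 8, 30, 11, 11)
separable = DepthwiseSeparableConv3d(8, 16)
standard = nn.Conv3d(8, 16, kernel_size=3, padding=1)
print(separable(x).shape)  # torch.Size([2, 16, 30, 11, 11])
print(sum(p.numel() for p in separable.parameters()),   # far fewer parameters
      sum(p.numel() for p in standard.parameters()))     # than a standard 3D conv
```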

The integration of the Convolutional Block Attention Module (CBAM) with dilated convolutions via residual connections is another novel aspect of the paper. 
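To make this combination concrete, the sketch below chains a dilated 2D convolution, a simplified CBAM-style attention module (channel attention followed by spatial attention), and a residual connection. It is a hypothetical illustration with made-up layer sizes and a reduced CBAM, not the MDRDNet block described in the manuscript.

```python
import torch
import torch.nn as nn

class SimpleCBAM(nn.Module):
    """Simplified CBAM: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, h, w = x.shape
        # channel attention from average- and max-pooled channel descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise average and max maps
        smap = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(smap))

class ResidualDilatedCBAMBlock(nn.Module):
    """Dilated conv -> CBAM-style attention, wrapped in a residual connection."""
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(channels)
        self.cbam = SimpleCBAM(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.cbam(self.relu(self.bn(self.conv(x))))
        return self.relu(out + x)  # residual connection preserves the input signal

x = torch.randn(2, 32, 11, 11)
print(ResidualDilatedCBAMBlock(32)(x).shape)  # torch.Size([2, 32, 11, 11])
```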

The paper claims that MDRDNet consistently outperforms existing advanced methodologies, although it would be beneficial to provide specific performance metrics and comparisons to quantify these improvements. Nevertheless, this claim indicates that the proposed architecture has the potential to set a new benchmark for hyperspectral image classification accuracy.

The methodology presented in the paper is promising, but there are some specific improvements and further controls that the authors should consider to enhance the rigor and comprehensibility of their research:

A more comprehensive discussion of the hyperparameter tuning process would be beneficial. Explain how hyperparameters like learning rates, batch sizes, and the number of layers were selected and optimized. Providing insights into the impact of these choices on model performance would enhance the methodology's transparency.

Describe the preprocessing steps applied to the hyperspectral data, such as normalization, noise reduction, or feature extraction techniques. These steps can significantly impact classification results, so detailing them is essential for replicability.

It's common practice to include baseline models for comparison. Including well-established hyperspectral classification algorithms or CNN architectures as baseline models would help demonstrate the improvement achieved by MDRDNet more convincingly.

To ensure the robustness of the results and the avoidance of overfitting, consider using cross-validation techniques. This would involve splitting the data into multiple training and testing sets and reporting average performance metrics across these splits.
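As a concrete example of what is meant here, a stratified k-fold protocol could look like the scikit-learn sketch below. The feature matrix X, labels y, and the random-forest classifier are placeholder stand-ins rather than MDRDNet or the paper's datasets.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder data: per-pixel spectra (X) and land-cover labels (y)
X = np.random.rand(1000, 200)             # 1000 samples, 200 spectral bands
y = np.random.randint(0, 9, size=1000)    # 9 classes

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
oa_scores, kappa_scores = [], []
for train_idx, test_idx in skf.split(X, y):
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    oa_scores.append(accuracy_score(y[test_idx], pred))
    kappa_scores.append(cohen_kappa_score(y[test_idx], pred))

# Report mean and standard deviation across the folds
print(f"OA:    {np.mean(oa_scores):.3f} +/- {np.std(oa_scores):.3f}")
print(f"Kappa: {np.mean(kappa_scores):.3f} +/- {np.std(kappa_scores):.3f}")
```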

Since the paper mentions imbalanced data distribution as a challenge, explain how the imbalanced data problem was addressed in the experiments. Detail any oversampling, undersampling, or class weighting techniques used to mitigate this issue.
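One simple mitigation the authors could report is inverse-frequency class weighting in the training loss. The PyTorch sketch below illustrates the idea with invented per-class counts; it is not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical per-class sample counts for an imbalanced hyperspectral scene
class_counts = torch.tensor([4500., 1200., 300., 80., 2000.])

# "Balanced" inverse-frequency weights: n_samples / (n_classes * count_per_class)
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 5)          # model outputs for a mini-batch
labels = torch.randint(0, 5, (16,))  # ground-truth class indices
loss = criterion(logits, labels)     # rare classes contribute more per sample
print(weights, loss.item())
```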

Visualizations, such as activation maps or feature maps, can help readers understand how the proposed architecture processes hyperspectral data. Including visual aids can enhance the clarity of the methodology section.

Provide information about the computational resources used for training and testing, such as the type of hardware (e.g., GPU) and the training time. This information can help readers assess the practicality of implementing the proposed model.

Discuss the potential generalizability of the model to other hyperspectral datasets or applications. Consider testing the model on datasets with varying spectral and spatial characteristics to assess its versatility.

Perform a sensitivity analysis to explore how changes in critical hyperparameters or architectural choices affect model performance. This analysis can help identify the model's robustness and limitations.

By addressing these aspects in the methodology section, the authors can provide a more comprehensive and transparent account of their research, facilitating a better understanding of the strengths and limitations of MDRDNet for hyperspectral image classification.

 

The conclusion mentions that experiments were conducted to dissect the contributions of the 2D and 3D convolutional components, indicating that both components significantly elevate the model's classification accuracy. However, the paper could benefit from providing more detailed results and insights from these experiments. This would help readers better understand the impact of each component on the model's performance.

While the conclusion provides a clear overview of the contributions and presents positive findings regarding MDRDNet's competitiveness, it could be strengthened by including more specific performance metrics and a deeper analysis of the contributions of individual components. Nonetheless, the conclusions align with the main question posed in terms of improving hyperspectral image classification through the proposed architecture.

 

I suggest adding more recent papers in the field:

Yang, Z.; Zheng, N.; Wang, F. DSSFN: A Dual-Stream Self-Attention Fusion Network for Effective Hyperspectral Image Classification. Remote Sens. 2023, 15, 3701. https://doi.org/10.3390/rs15153701

Todorov, V.; Dimov, I. Unveiling the Power of Stochastic Methods: Advancements in Air Pollution Sensitivity Analysis of the Digital Twin. Atmosphere 2023, 14, 1078. https://doi.org/10.3390/atmos14071078

Yang, H.; Yang, M.; He, B.; Qin, T.; Yang, J. Multiscale Hybrid Convolutional Deep Neural Networks with Channel Attention. Entropy 2022, 24, 1180. https://doi.org/10.3390/e24091180

Dimov, I. et al. A Super-Convergent Stochastic Method Based on the Sobol Sequence for Multidimensional Sensitivity Analysis in Environmental Protection. Axioms 2023, 12, 146. https://doi.org/10.3390/axioms12020146

Moderate English checking is required. For example, throughout the manuscript there are missing spaces between the text and the citation references in brackets; this must be fixed. There are also missing commas and periods after the formulas.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

The authors have revised the paper based on the comments; hence, the paper can be accepted.

Author Response

We sincerely thank the reviewer for the careful reading, and we thank you again for the positive comments.

Reviewer 2 Report

The authors did not address my comments well, so I suggest that they reconsider them.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

Thank you to the authors for addressing all my remarks and suggestions. I feel the manuscript is now greatly improved and only minor corrections are required.

Although the exploration of the optimal hyperparameter values is described, I still recommend adding a paragraph on a possible sensitivity analysis to explore how different changes affect model performance, along with some references on sensitivity analysis. Missing commas or periods after each formula in the manuscript should be fixed. A list or table of all abbreviations used should also be added.

Some sentences are too long, especially in the Introduction. I have also already mentioned the missing punctuation (commas and/or periods) after the formulas.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
