Article
Peer-Review Record

Credit Risk Prediction Model for Listed Companies Based on CNN-LSTM and Attention Mechanism

Electronics 2023, 12(7), 1643; https://doi.org/10.3390/electronics12071643
by Jingyuan Li 1, Caosen Xu 1,*, Bing Feng 1 and Hanyu Zhao 2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 4: Anonymous
Reviewer 5:
Reviewer 6:
Submission received: 15 February 2023 / Revised: 22 March 2023 / Accepted: 23 March 2023 / Published: 30 March 2023
(This article belongs to the Section Artificial Intelligence)

Round 1

Reviewer 1 Report

In this paper, the authors propose a credit risk prediction model for listed companies based on CNN-LSTM and attention mechanisms. The manuscript needs to be better written, and the results are not clearly presented in their current form. The study is potentially interesting, but the manuscript and study are not ready for publication.

Below are some comments regarding the presentation of the paper.

1. The manuscript should be edited; for example, there is no space before or after a citation, e.g., ",[3".

2. Figure 1. The caption is too short and not explanatory.

3. In the sentence after Equation (1), f is explained; however, f does not appear in Eq. (1), nor do XL, K, the convolution sign *, etc.

4. Similar to comment 3, the same holds for Equation (2).

5. Figures 2-7: the captions are not explanatory.

Author Response

Response to Reviewers

 

Reviewer 1:

Comments and Suggestions for Authors

 

Comments:

The manuscript should be edited; for example, there is no space before or after a citation, e.g., ",[3".

 

 

Response: Thank you for providing us with valuable feedback on our paper. We greatly appreciate your positive comments on the topic and the results we presented. We fully understand your concerns regarding the presentation of our paper and sincerely apologize for any confusion or lack of clarity. To address your concerns, we have made significant revisions to the paper and have corrected the spacing around reference [3] so that the citation formatting is consistent.

 

Comments:

Figure 1. The caption is too short and not explanatory.

 

Response:

Thank you for your suggestion. We have revised the caption of Figure 1 to read "Combined flow chart of the CNN, LSTM, and AM model" for better understanding.

 

 

Comments:

The sentence after Equation (1), f is explained. However, it does not appear in the eq. (1), and also XL, K, convolution sign *, etc. Similar to 3., the same for equation (2).

 

 

Response:

 

Thank you for your feedback, and we apologize for these mistakes. We have re-examined Formulas (1) and (2): the explanation after Formula (1) was redundant, so we deleted it, and the explanation after Formula (2) now fully defines the variables appearing in Formula (1), which prevents this ambiguity from recurring.

 

Comments:

Figures 2-7. The caption is not explanatory.

 

Response:

 

Thank you for your suggestion; we have revised the captions of Figures 2-7 to make them more informative. Figure 2 shows the operation flow of the attention mechanism; Figure 3 shows the flow of the one-dimensional CNN; Figure 4 shows the operation flow of the long short-term memory (LSTM) network; Figure 5 compares the three models under different inference quantities; Figure 6 is a line chart of the inference speed of the three models on data of different complexity and inference quantities; and Figure 7 compares the prediction accuracy of the three models on data of varying complexity within the same data set. We hope that our revised manuscript meets your expectations, and we look forward to receiving your feedback on our updated version.

 

Reviewer 2 Report

The authors propose CNN-LSTM-AM for credit risk prediction. Although the idea itself is quite promising, this paper needs a tremendous amount of revision.

 

First of all, please explain all of the abbreviations from the very beginning to avoid confusion for readers.

The list of references is really limited. There are many other studies (not necessarily on credit risk) that explain ANN, CNN, and related methods and could serve as comparisons for this study. The current manuscript does not have an adequate number of references as a study benchmark.

The methodology explanation is really lacking. The explanation itself is very general and should be extended to fit the case. Furthermore, the relations between the methods (CNN, LSTM, and AM) have not been explained. How did the authors utilize each method?

The results and analysis should be extended. The comparison should also be made for each part of the method, including CNN, LSTM, and AM. The comparison with other basic methods cannot justify that the proposed method outperforms the others. Maybe a hybrid VSM with AM would give a better result? If the authors want to compare results, a mere comparison with basic methods is not enough.

 

The placement of references is all over the place. 

Author Response

Reviewer 2:

Comments and Suggestions for Authors

 

Comments:

First of all, please explain all of the abbreviations from the very beginning to avoid confusion for readers.

 

Response:

Thank you for your feedback. We apologize for not adequately explaining the abbreviations, which created ambiguity; we have thoroughly reviewed the manuscript to ensure all abbreviations are defined. We hope that the revised manuscript will be more understandable and accessible to readers.

 

Comments:

The list of references is really limited. There are many other studies (not necessarily on credit risk) that explain ANN, CNN, and related methods and could serve as comparisons for this study. The current manuscript does not have an adequate number of references as a study benchmark.

 

Response:

Thank you for your suggestion; this is indeed something we had not considered carefully. In response to your feedback, we have added more literature explaining ANN, CNN, and other models to serve as research benchmarks for the manuscript.

 

Comments:

The methodology explanation is really lacking. The explanation itself is very general and should be extended to fit the case. Furthermore, the relations between the methods (CNN, LSTM, and AM) have not been explained. How did the authors utilize each method?

 

Response:

Thank you for your advice. We have added an explanation in the manuscript in response to your concerns. The long short-term memory network (LSTM) is a well-known variant of the RNN that mitigates the gradient vanishing and explosion problems in long-sequence prediction and is now commonly used for time-series forecasting. This article addresses the prediction of listed companies' credit risk: it focuses on the companies' credit indicators and then performs risk prediction, so the LSTM model is suitable for the prediction task. In addition, to shorten the training time of the LSTM model and reduce the number of parameters, we introduce a convolutional neural network (CNN): before the data enter the LSTM model, the CNN performs feature extraction, selecting indicators that are more relevant to corporate credit risk prediction and reducing the complexity of the data. Combining the advantages of the LSTM and CNN models yields a CNN-LSTM model. Finally, we introduce the attention mechanism (AM), which learns autonomously to select the more relevant feature vectors, optimizes the model, and improves the prediction accuracy of the CNN-LSTM model. The three parts are combined into the CNN-LSTM-AM model.
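To make the combination concrete, the following is a minimal sketch of a CNN-LSTM model with an attention layer, assuming Keras/TensorFlow; the input shape (12 time steps of 30 credit indicators), the layer sizes, and the simple additive attention form are illustrative assumptions rather than the paper's exact configuration.

# Minimal sketch of a CNN-LSTM model with an attention layer (Keras/TensorFlow).
# Shapes, layer sizes, and the additive attention form are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

TIME_STEPS, N_FEATURES = 12, 30   # hypothetical: 12 periods x 30 credit indicators

inputs = layers.Input(shape=(TIME_STEPS, N_FEATURES))

# 1D CNN: extract local feature patterns and reduce data complexity before the LSTM.
x = layers.Conv1D(filters=32, kernel_size=3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(pool_size=2)(x)

# LSTM: model the temporal dependence of the credit indicators.
h = layers.LSTM(64, return_sequences=True)(x)

# Simple additive attention: learn a weight per time step and pool the LSTM outputs.
scores = layers.Dense(1, activation="tanh")(h)          # (batch, steps, 1)
weights = layers.Softmax(axis=1)(scores)                # attention weights over time steps
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)  # default / non-default probability

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()

In this sketch the attention layer pools the LSTM outputs with learned per-time-step weights; the paper's exact attention formulation may differ.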

 

Comments:

The results and analysis should be extended. The comparison should also be made for each part of the method, including CNN, LSTM, and AM. The comparison with other basic methods cannot justify that the proposed method outperforms the others. Maybe a hybrid VSM with AM would give a better result? If the authors want to compare results, a mere comparison with basic methods is not enough.

 

Response:

 

Thank you for your advice; we also realize that our comparison methodology needed to be improved. To this end, we have added five sets of experiments comparing CNN, LSTM, CNN-LSTM, AM, and other related models with our model: the training time of each model under different amounts of data, the inference time on different data, and a comparison of the number of parameters and the amount of computation. The results show that, by introducing CNN dimensionality reduction and AM optimization on top of the LSTM model, our model performs well on all of these measures. Finally, we also compared the accuracy of the different models for listed-company credit risk prediction on different data sets; the results show that our model performs very well and generalizes well.
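As an illustration of how such a comparison can be organized, here is a small, hypothetical timing-and-size harness (assuming compiled Keras models; the model names and test data are placeholders, and this is not the authors' actual experiment code):

# Hypothetical comparison harness: reports the parameter count and mean inference
# time of each candidate Keras model on the same test set.
import time
import numpy as np   # only needed for the placeholder test data in the example below

def benchmark(candidate_models, x_test, repeats=10):
    """Print parameter count and average inference time for each model."""
    for name, model in candidate_models.items():
        n_params = model.count_params()
        start = time.perf_counter()
        for _ in range(repeats):
            model.predict(x_test, verbose=0)
        elapsed = (time.perf_counter() - start) / repeats
        print(f"{name:12s} params={n_params:>9,d} inference={elapsed * 1000:.1f} ms")

# Example usage, assuming models named cnn, lstm, cnn_lstm, and cnn_lstm_am exist:
# x_test = np.random.rand(1000, 12, 30).astype("float32")
# benchmark({"CNN": cnn, "LSTM": lstm, "CNN-LSTM": cnn_lstm, "CNN-LSTM-AM": cnn_lstm_am}, x_test)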

Comments:

 The placement of references is all over the place. 

 

Response:

Thank you for your feedback; this was indeed our negligence. We have adjusted the order of the references so they are easier to follow. We hope you approve of our revisions and look forward to your feedback.

Reviewer 3 Report

The paper presents a new and relevant method for credit risk prediction in listed companies using deep machine learning techniques, specifically a CNN-LSTM model with attention mechanism. The method has shown improved accuracy compared to traditional models and can handle both linear and nonlinear datasets without requiring a large number of historical default records. Although the model is more complex and slower than traditional models, it has a wide application range, fast operation speed, and high AUC value. The comparative study of credit risk prediction models has practical significance for maintaining a stable investment environment in the stock market, providing objective investment advice for investors, and helping commercial banks reduce economic losses.

Author Response

Reviewer 3:

Comments and Suggestions for Authors

 

Comments:

The paper presents a new and relevant method for credit risk prediction in listed companies using deep machine learning techniques, specifically a CNN-LSTM model with attention mechanism. The method has shown improved accuracy compared to traditional models and can handle both linear and nonlinear datasets without requiring many historical default records. Although the model is more complex and slower than traditional models, it has a wide application range, fast operation speed, and high AUC value. The comparative study of credit risk prediction models has practical significance for maintaining a stable investment environment in the stock market, providing objective investment advice for investors, and helping commercial banks reduce economic losses.

 

Response:

Thank you for your comments and for your high recognition of our work; we will continue working to improve this manuscript further.

Reviewer 4 Report

The authors study the problem of predicting the credit risk of listed companies. They propose a CNN-LSTM model with an attention layer to process the input sequence and regress the final score.

 

Comments on the paper:

1. For the introduction of attention, "Neural Machine Translation by Jointly Learning to Align and Translate" and "Attention Is All You Need" should be mentioned, as they are the foundation of attention models.

 

2. The employed attention is not the standard one with QKV, attention scores, and attention probabilities. Equation (1) does not match the description paragraph that follows it.

 

3. The paper should justify the rationale for using a CNN-LSTM-AM model. Why not use an attention-only model? Why not CNN-only or LSTM-only models?

4. There are many papers applying CNN/LSTM/attention to time-series data. How does the proposed work differ?

5. The architecture of the overall model should be described more clearly with a figure/algorithm block.

 

 

 

Author Response

Reviewer 4:

Comments and Suggestions for Authors

 

Comments: 

For the introduction of attention, "Neural Machine Translation by Jointly Learning to Align and Translate" and "Attention Is All You Need" should be mentioned, as they are the foundation of attention models. The employed attention is not the standard one with QKV, attention scores, and attention probabilities. Equation (1) does not match the description paragraph that follows it.

 

Response: 

 

Thank you for your suggestions on our paper. First, we have mentioned and cited these two papers to make our writing more convincing. We apologize for our oversight: we now use the standard attention formulation with Q, K, and V, attention scores, and attention probabilities, and we have modified the explanation following Equation (1) accordingly.
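For reference, the standard scaled dot-product attention from "Attention Is All You Need" (which the revised explanation presumably follows) is:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V

where Q, K, and V are the query, key, and value matrices, d_k is the key dimension, and the softmax output gives the attention probabilities.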

 

 

 

Comments: 

 

The paper should justify the rationale for using a CNN-LSTM-AM model. Why not use an attention-only model? Why not CNN-only or LSTM-only models?

Response: 

 

Thank you for your suggestion. To address your concerns, we have added several new experiments comparing the performance of the CNN, LSTM, AM, and CNN-LSTM models against our model: the amount of computation, the number of parameters, the prediction accuracy of the different models on different data sets, the training time, and the inference time. The results show that our model performs better in these experiments, which thoroughly explains why we chose the CNN-LSTM-AM model.

 

Comments: 

There are many papers applying CNN/LSTM/attention to time-series data. How does the proposed work differ? The architecture of the overall model should be described more clearly with a figure/algorithm block.

 

Response:

 

Thank you for your feedback. First, we added several experiments comparing the differences and performance of CNN, LSTM, and attention against our model on time-series data, such as the amount of computation, the number of parameters, and accuracy. The comparison shows that the prediction accuracy of the LSTM model is higher than that of the CNN model, and that using the attention mechanism alone gives an unsatisfactory prediction effect. Combining the CNN with the LSTM model, the results show that the CNN-LSTM model outperforms the individual CNN, LSTM, and attention methods in prediction accuracy, but its computing speed is still not ideal; therefore, the attention mechanism is introduced to improve the computing speed of the model by allocating weights and computation reasonably. Together these parts constitute our risk prediction model for listed companies. Second, the overall architecture description needed clarification, so we drew a more detailed overall architecture diagram. In the figure, the data are first preprocessed and then fed into a one-dimensional CNN network: convolution and pooling (max pooling) reduce the dimensionality of the data and extract feature vectors, normalization is applied, and a flatten layer and a fully connected layer follow. The data then enter the LSTM network for risk prediction and pass through the AM layer, which assigns weights to the features and allocates computation reasonably to improve the model's performance; finally, the predicted results are output. We hope our work addresses these issues, and we look forward to your feedback.
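As a small illustration of the preprocessing step described above, the following sketch builds normalized sliding windows for the network; the window length, feature count, and scaler choice are assumptions for illustration only, not the paper's exact pipeline.

# Hypothetical preprocessing sketch: scale the credit indicators and build
# fixed-length time windows for the CNN-LSTM-AM model.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(indicators, labels, window=12):
    """Turn a (T, n_features) indicator matrix into (samples, window, n_features) inputs."""
    scaled = MinMaxScaler().fit_transform(indicators)    # normalization step
    X, y = [], []
    for t in range(window, len(scaled)):
        X.append(scaled[t - window:t])                   # past `window` observations
        y.append(labels[t])                              # default flag at time t
    return np.asarray(X, dtype="float32"), np.asarray(y, dtype="float32")

# Example with random placeholder data: 40 periods of 30 indicators for one company.
# X, y = make_windows(np.random.rand(40, 30), np.random.randint(0, 2, size=40))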

 

Reviewer 5 Report

This is an interesting paper; the authors propose a credit risk prediction model for listed companies based on CNN-LSTM and an attention mechanism. The proposed method has room for improvement before the manuscript can be accepted. Careful revision of the manuscript is necessary for its publication.

 

 

1. The Abstract section is too short. Please expand it to include an introduction to the proposed method, the advantages of the method, and the main results.

 

2. The authors need to add a table of the symbols used in the paper to make the paper easier to read.

 

3. The Introduction should be clearly presented to highlight the main ideas and motivation behind the proposed research. Please include and clearly state the research question and the motivation of the proposed study in the Introduction. The authors should cover the research gap.

 

 

4. The authors should analyze how to set the parameters of the proposed methods in the framework. Do they have an "optimal" choice?

 

5. The equation variables must be described in all equations. Also, describe the purpose of each equation and its role in processing the data. Avoid undefined variables in the equations.

 

 

6. It is preferable to choose the baseline algorithm from the new deep learning approaches.

 

7. Table captions need to be expanded to make them self-explanatory.

 

 

8. It would be valuable to provide some analysis or discussion of the computational complexity of the proposed framework.

 

 

9.  The English grammar and punctuation in the article appear to be generally sound, but there may be a few minor errors or inconsistencies that could be corrected through proofreading. It would be beneficial to review the article for any typos or spelling errors, as these can detract from the readability and credibility of the work.

 

10. The following papers on the same topic should be cited and discussed:

1. Review of swarm intelligence-based feature selection methods

2. Reservoir Operation Management with New Multi-Objective (MOEPO) and Metaheuristic (EPO) Algorithms

Author Response

Reviewer 5:

Comments and Suggestions for Authors

 

Comments: 

The Abstract section is too short. Please expand it to include an introduction to the proposed method, the advantages of the method, and the main results.

 Response:

 

Thank you for your suggestion. We have expanded the Abstract in more detail, introducing the description of the method, its advantages, and the main results. Our approach builds on the strengths of the LSTM model for long-term time-series prediction; on this basis it incorporates the advantages of the CNN model, forming a CNN-LSTM model that reduces the complexity of the data and improves the model's computation and training speed. To address the possible lack of historical data in long-term sequence prediction with the LSTM model, which can reduce prediction accuracy, we introduced the attention mechanism to assign weights autonomously and optimize the model. The results show that our model has clear advantages over the CNN, LSTM, CNN-LSTM, and other models, which is significant for research on the credit risk prediction of listed companies.

 

Comments: 

The authors need to add a table of the symbols used in the paper to make the paper easier to read.

 

 Response:

Thank you for the suggestion; we have added a table of the symbols used to make the paper easier to read.

 

Comments: 

The Introduction should be clearly presented to highlight the main ideas and motivation behind the proposed research. Please include and clearly state the research question and the motivation of the proposed study in the Introduction. The authors should cover the research gap.

 

Response:

 

Thank you for your suggestion. To address the issues you raised, we have expanded the Introduction. First, we motivate the research by the importance of credit risk prediction for listed companies to the financial market, i.e., to keeping the market stable, healthy, and orderly. Our proposed research takes the advantages of the LSTM model in long-term time-series prediction as the basis of the model and then combines the strengths of the CNN network in feature extraction to reduce the amount of computation and the number of parameters of the LSTM model and improve its performance. Finally, to address the missing historical data that may occur in long-term time-series prediction with the LSTM model, the attention mechanism is introduced: through independent learning it allocates computation weights reasonably, optimizes the model, and improves the prediction accuracy and speed of the CNN-LSTM model. We therefore propose the CNN-LSTM-AM model for predicting the credit risk of listed companies.

 

Comments:

The authors should analyze how to set the parameters of the proposed methods in the framework. Do they have an "optimal" choice? The equation variables must be described in all equations. Also, describe the purpose of each equation and its role in processing the data. Avoid undefined variables in the equations.

Response:

 

Thank you for your suggestion. First, we re-described the variables of the equations. Following your suggestion, we also describe the purpose of each equation and its role in processing the data, and we avoid using undefined variables in the equations.

 

 Comments:

 

It is preferable to choose the baseline algorithm from the new deep learning approaches. Table captions need to be expanded to make them self-explanatory.

 

Response:

Thank you for the suggestion; we are sorry for the ambiguity caused by our table captions. To fix this, we have reworded the table captions to make them easier to understand. In addition, we have re-selected the baseline algorithm from deep learning methods such as CNN models to make the comparison more complete.

A CNN (convolutional neural network) is a deep learning model mainly used in image recognition, speech recognition, natural language processing, and other fields. Its calculation process is as follows:

  1. Input layer: the raw data, usually an image or a speech signal, are fed into the network.

  2. Convolution layer: a convolution operation is performed on the input data to extract feature information. The convolution can be viewed as a sliding window: the data in the window are multiplied element-wise by the convolution kernel to obtain an output value, then the window is moved one step and the operation continues until the entire input is covered.

  3. Activation layer: a nonlinear transformation is applied to the convolution layer's output to increase the network's nonlinear capacity. Commonly used activation functions include ReLU, sigmoid, and tanh.

  4. Pooling layer: the convolution layer's output is downsampled to reduce the amount of data while retaining important feature information. Common pooling methods are max pooling and average pooling.

  5. Fully connected layer: the pooling layer's output is flattened into a one-dimensional vector and multiplied by a weight matrix to obtain a new vector. Fully connected layers are usually used in classification tasks to map feature vectors to categories.

  6. Output layer: an output layer is chosen according to the task. For classification tasks, a softmax layer is usually used to convert the fully connected layer's output into a probability distribution.

  7. Loss function: the gap between the model's prediction and the actual value is computed, usually with the cross-entropy loss function (the standard definitions for steps 2, 6, and 7 are recalled after this list).

  8. Backpropagation: the error is propagated backwards according to the gradient of the loss function, and the network parameters are updated to minimize the loss.

  9. The above steps are repeated until the model converges or the preset number of training rounds is reached.
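For completeness, the standard definitions behind steps 2, 6, and 7 (stated in their usual textbook form, not necessarily the paper's notation) are:

(x * k)[i] = \sum_{j=0}^{m-1} x[i+j]\, k[j] \quad \text{(1D convolution as implemented in deep-learning libraries, i.e., cross-correlation with an $m$-tap kernel)}

\mathrm{softmax}(z)_c = \frac{e^{z_c}}{\sum_{c'} e^{z_{c'}}}, \qquad \mathcal{L}_{\mathrm{CE}} = -\sum_{c} y_c \log \hat{y}_c

where z is the fully connected layer's output, y is the one-hot true label, and \hat{y} is the softmax probability vector.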

We hope that our improvements can meet your suggestions, and we look forward to your feedback.

 

Comments:

It would be valuable to provide some analysis or discussion of the computational complexity of the proposed framework. The English grammar and punctuation in the article appear to be generally sound, but there may be a few minor errors or inconsistencies that could be corrected through proofreading. It would be beneficial to review the article for any typos or spelling errors, as these can detract from the readability and credibility of the work.

Response:

 

Thank you for your suggestion; it is of great significance for our research. We have added a new analysis and discussion of the computational complexity of the proposed framework. In response to your request, we also re-examined the article and corrected the English grammar and punctuation errors, which makes our manuscript more readable.

 

Comments:

The following papers on the same topic should be cited and discussed:

  1. Review of swarm intelligence-based feature selection methods
  2. Reservoir Operation Management with New Multi-Objective (MOEPO) and Metaheuristic (EPO) Algorithms

Response:

Thank you for your suggestion; we cited and discussed the paper you suggested on the same topic.

Reviewer 6 Report

The paper proposes a credit risk prediction model for listed companies using a CNN-LSTM model. The proposed model can predict the credit risk of financial institutions to assess their default risk. The current form of the paper should be rewritten so that it is organized logically. Some questions should be addressed, as follows:

1) What is the technical contribution of this study? Novel ideas and contributions should be added to the paper.

2) The paper does not show the system architecture. The flow chart is not given in detail, so the proposed system architecture should be presented and explained in detail in the paper.

(3) Explain data sources used in the experiments.

(4) Give the findings of the experiments and explain them in detail. What are the findings from the experimental results?

(5) The references should be updated to the state of the art.

(6) Tables and figures should be improved significantly.

All issues mentioned above should be addressed in the paper.

 

 

Author Response

Reviewer 6:

Comments and Suggestions for Authors

 

Comments: 

What is the technical contribution of this study? Novel ideas and contributions should be added to the paper.

 

Response:

Thank you for your suggestion; it is of great help to our paper. The main technical contribution of this paper is to build on the LSTM model and combine it with the advantages of the CNN network into a CNN-LSTM model, which reduces the computational load and training time of the LSTM model: using a one-dimensional CNN network to reduce the dimensionality of the historical data improves the model's performance. The attention mechanism is then introduced to address the problem of missing historical data in long-term time-series prediction with the LSTM model, which improves the prediction accuracy: the attention mechanism learns autonomously, allocates weights reasonably, and concentrates computation on the parts most related to the credit risk of listed companies, further improving the model's performance. These components are finally combined into the CNN-LSTM-AM model.

 

Comments:

The paper does not show the system architecture. The flow chart is not given in detail, so the proposed system architecture should be presented and explained in detail in the paper.

 Response: 

 

Thank you for your suggestion. We have redrawn a more detailed system architecture diagram. In the chart, the data are first preprocessed and then fed into a one-dimensional CNN network: convolution and pooling (max pooling) reduce the dimensionality of the data and extract feature vectors, normalization is applied, and the output passes through a flatten layer and a fully connected layer. The data then enter the LSTM network for risk prediction and finally pass through the AM layer, which assigns weights to the features and allocates computation reasonably to improve the model's performance; the prediction result is then output.

 

Comments: 

Explain data sources used in the experiments.

 

 Response: 

Thank you for your suggestion. To address your point, we added an explanation of the sources of the data used. China's stock exchanges are the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE). The SSE was established in 1990 and is located in Shanghai, while the SZSE was established in 1991 and is located in Shenzhen. Both exchanges are regulated by the China Securities Regulatory Commission (CSRC) and are open to domestic and foreign investors. The SSE and SZSE are the two largest stock exchanges in China and important components of the country's financial system. The SSE is known for its blue-chip stocks, while the SZSE is known for its technology and growth-oriented companies.

The KMV default database is a database that contains information on the probability of default (PD) for various companies. It was developed by KMV Corporation, a financial technology company that was acquired by Moody's Analytics in 2002. The database uses a statistical model to estimate the likelihood of a company defaulting on its debt obligations within a given time frame. The model considers various factors such as financial ratios, market data, and macroeconomic indicators to calculate the PD. The KMV default database is used by banks, financial institutions, and investors to assess the credit risk of their portfolios and make informed investment decisions.
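For background, KMV-type models rest on a Merton-style distance to default; the usual textbook formulation (given here for context only, not as the exact proprietary model behind the database) is:

DD = \frac{\ln\!\left(V_A / D\right) + \left(\mu_A - \tfrac{1}{2}\sigma_A^{2}\right) T}{\sigma_A \sqrt{T}}, \qquad \mathrm{PD} \approx N(-DD)

where V_A is the market value of the firm's assets, D the default point (roughly the face value of debt due at horizon T), \mu_A and \sigma_A the asset drift and volatility, and N the standard normal cumulative distribution function.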

 

 

Comments: 

Give the findings of the experiments and explain them in detail. What are the findings from the experimental results?

Response: 

 

Thank you for your suggestion; we have added detailed explanations and findings for the new experimental results. For example, we compared the CNN, LSTM, AM, and CNN-LSTM models against our model in terms of the amount of computation, the number of parameters, the prediction accuracy on different data sets, the training time, and the inference time. The results show that our model performs well in these experiments, which fully explains why we chose the CNN-LSTM-AM model and further improves our paper. We hope our edits meet your suggestions and look forward to your feedback.

 

Round 2

Reviewer 1 Report

The authors followed my comments and suggestions.

Author Response

Thank you so much for approving this article. Thank you for your valuable advice and guidance.

Reviewer 2 Report

It can be accepted as it is.

Author Response

Thank you so much for approving this article. Thank you for your valuable advice and guidance.

Reviewer 5 Report

In the revised version, authors improved experimental results and presentation of main contributions.  I have no other comments in the next round. This paper can be accepted in this round.

Author Response

Thank you so much for approving this article. Thank you for your valuable advice and guidance.

Reviewer 6 Report

The paper has addressed the problems raised in the reviewer's comments. However, the paper should be checked carefully for English and for uppercase/lowercase usage in the figure notes. It can be published after the paper is corrected.

Author Response

Reviewer 6:

Comments and Suggestions for Authors

 

Comments:

The paper has addressed the problems raised in the reviewer's comments. However, the paper should be checked carefully for English and for uppercase/lowercase usage in the figure notes. It can be published after the paper is corrected.

 

Response:

Thank you for providing us with valuable feedback on our paper. We greatly appreciate your positive comments on the topic and the results we presented. We fully understand your concerns regarding the presentation of our paper and sincerely apologize for any confusion or lack of clarity. In response to your comments, we have thoroughly reviewed our manuscript for grammatical errors, particularly in the figure captions. We explain the KVM model in the article and expand it as the Kernel-based Virtual Machine (KVM), change ANN to Artificial Neural Network (ANN), and fix some related issues that appeared in the figure captions. Once again, we would like to express our sincere gratitude for your feedback, and we are committed to making the necessary revisions to ensure that our paper is of the highest quality. We hope that our revised manuscript will meet your expectations, and we look forward to receiving your feedback on our updated version.

 
