Article
Peer-Review Record

Structural Health Monitoring of Laminated Composites Using Lightweight Transfer Learning

Machines 2024, 12(9), 589; https://doi.org/10.3390/machines12090589
by Muhammad Muzammil Azad, Izaz Raouf, Muhammad Sohail and Heung Soo Kim *
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 23 July 2024 / Revised: 22 August 2024 / Accepted: 23 August 2024 / Published: 25 August 2024

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The article submitted by Kim et al. proposes a lightweight transfer learning approach for structural health monitoring of laminated composites.

Overall, the obtained results are promising, and the research objective has been achieved. However, it is recommended that the authors address the following changes before publication.

1. Data conversion: Since the original signal has been modified into the time-frequency scalogram images, those images can have the potential to be cropped, rotated, or resized to fulfill the limited dataset. What is the validation result if the augmented dataset was input into the conventional deep-learning model like CNN, LSTM, and so on? If the results were also good, was it still necessary to load a transfer-learning model?

2. Data conversion: What is the size and count of images in each class? It is important to know how to reproduce the data as sufficient data is required to even train a transfer learning model.

3. There isn't sufficient evidence to demonstrate that the proposed models are lightweight. It is recommended to include a table displaying the number of parameters or the size of the transfer learning models to substantiate the claim of their lightweight nature.

4. Limited Data Scenarios: While the paper addresses the limited data issue in SHM of composite structures, it may be important to acknowledge the potential limitations of the proposed approach in scenarios where data scarcity is a significant challenge. The study could benefit from adding some discussion regarding the potential constraints and limitations of the proposed framework in real-world applications with extremely limited data availability.

Comments on the Quality of English Language

Quality of English is fine. 

Author Response

Response to the Reviewer 1 Comments

Reviewer 1 Feedback: The article submitted by Kim et al. proposes a lightweight transfer learning approach for structural health monitoring of laminated composites. Overall, the obtained results are promising, and the research objective has been achieved. However, it is recommended that the authors address the following changes before publication.

Response: Thank you very much for taking the time to review the paper. We are pleased to receive your recognition of our article. We carefully considered your suggestions, and here we provide detailed answers to all your concerns. All revisions made in response to your comments have been highlighted in the machines-3145369 - Revised Manuscript document.

 

Comment 1: Data conversion: Since the original signal has been modified into the time-frequency scalogram images, those images can have the potential to be cropped, rotated, or resized to fulfill the limited dataset. What is the validation result if the augmented dataset was input into the conventional deep-learning model like CNN, LSTM, and so on? If the results were also good, was it still necessary to load a transfer-learning model?

Response: Thank you for your insightful comment. The reviewer's concern regarding the use of data augmentation techniques such as cropping, rotating, or resizing images instead of loading a pre-trained model is valid for computer vision tasks. However, the present research is not purely a computer vision problem; rather, it utilizes vibrational data from composite structures. Such data contain the vibrational or dynamic characteristics of the composite structures. In such cases, data augmentation techniques such as cropping, rotating, or resizing images can disturb the dynamic characteristics of the various health states present in their respective signals, since the deep learning models perform autonomous feature extraction. Therefore, most of the research on data augmentation for damage detection of composite structures focuses on simulating the damage scenarios instead of using traditional computer vision-based data augmentation techniques. Moreover, simulating the exact behavior of composites is difficult, as it requires physical knowledge of the system, and using the simulated data as augmented data to train a deep learning model from scratch is also a tedious process. Thus, the present research overcomes both these challenges by not requiring the augmentation of data through simulation and by using a lightweight pre-trained model instead of building a model from scratch. To describe this, the following section has been added to the revised manuscript.

Changes in the manuscript:

Section 1 (3rd paragraph): “It is because these traditional data augmentation techniques can disturb the dynamic characteristics of the signals. Therefore, the commonly used approach for data augmentation is by simulating the exact behavior of the composite [23]. While simulation-based augmentation can yield decent results, it is challenging to replicate the precise behavior of composites since it requires physical understanding of the system. Therefore, training a deep learning model from scratch using the simulated data as augmented data is a tedious procedure. Thus, the goal of the current study is to resolve the issue of data scarcity in the health monitoring of composites using lightweight pre-trained models, without using data augmentation, simulation, and extensive experimentation.”

References:

  1. Khan, A.; Raouf, I.; Noh, Y.R.; Lee, D.; Sohn, J.W.; Kim, H.S. Autonomous Assessment of Delamination in Laminated Composites Using Deep Learning and Data Augmentation. Compos. Struct. 2022, 290, 115502, doi:10.1016/j.compstruct.2022.115502.


Comment 2: Data conversion: What is the size and count of images in each class? It is important to know how to reproduce the data as sufficient data is required to even train a transfer learning model.

Response: Thank you for suggesting the addition of size and count of the number of images in each class. The details for the total number of images in each class and their size have been added to the revised manuscript. The changes made in the revised manuscript are as follows:

Changes in the manuscript:

Section 3.2: “The length of the signal obtained through a single random response comprised 37,500 data points. The same number of data points was obtained for all 5 samples of the same health state. Therefore, the single response signals from all samples of each health state were concatenated into a single signal of 187,500 data points. This resulted in 100 images using a window size of 1875, without any overlapping. Ten such random responses were obtained for all 5 samples, resulting in a total of 1000 scalogram images for each health state. Therefore, a total of 3000 scalogram images were generated belonging to the three health states H, D1, and D2. The obtained images were then resized to a 224×224-pixel size, which is the input size required by the LTL models.”
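As a sanity check on the counts quoted above, the windowing arithmetic can be sketched in a few lines. This is an illustrative sketch only: the variable names are ours, and the CWT scalogram step itself is omitted.

```python
# Sketch of the windowing arithmetic from Section 3.2 (illustrative names,
# not the authors' actual code; the CWT conversion itself is not shown).

POINTS_PER_RESPONSE = 37_500   # data points in one random response of one sample
SAMPLES_PER_STATE = 5          # samples per health state (H, D1, D2)
WINDOW_SIZE = 1_875            # non-overlapping window length
RESPONSES = 10                 # random responses per sample
HEALTH_STATES = 3              # H, D1, D2

# One response from all 5 samples, concatenated into a single signal:
concat_length = POINTS_PER_RESPONSE * SAMPLES_PER_STATE   # 187,500 points

# Non-overlapping windows per concatenated signal -> scalogram images:
images_per_response = concat_length // WINDOW_SIZE        # 100 images

# All ten responses of one health state:
images_per_state = images_per_response * RESPONSES        # 1,000 images

# Full dataset across the three health states:
total_images = images_per_state * HEALTH_STATES           # 3,000 images

print(images_per_response, images_per_state, total_images)  # 100 1000 3000
```

The counts reproduce the 100, 1000, and 3000 figures stated in the revised Section 3.2.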

 

 

Comment 3: There isn't sufficient evidence to demonstrate that the proposed models are lightweight. It is recommended to include a table displaying the number of parameters or the size of the transfer learning models to substantiate the claim of their lightweight nature.

Response: Thank you for your insightful comment. The reviewer's concern regarding the insufficient evidence to demonstrate the lightweight nature of the transfer learning models is valid. Therefore, a table has been added to the revised manuscript showcasing the lightweight nature of these models in terms of the number of parameters and the model size. The changes made to the manuscript are as follows:

Changes in the manuscript:

Section 2.2: “Thus, lightweight transfer learning refers to transfer learning models with fewer parameters and reduced computational requirements, making them suitable for deployment in resource-constrained environments. Table 1 shows the total number of parameters and the size of some of the commonly used transfer learning models in comparison with the lightweight models.”

Section 2.2 (Table 1):

Table 1. Number of parameters and model size for commonly used transfer learning models compared with the LTL models.

Transfer Learning Model    Number of Parameters (Million)    Size of Model (MB)
AlexNet                    61.10                             233.07
DenseNet-121               8.06                              30.44
ResNet-50                  25.64                             97.59
VGG-16                     138.36                            527.79
InceptionV3                23.85                             90.86
NASNetMobile               5.32                              20.18
MobileNet                  4.25                              16.14
EfficientNet               5.33                              20.17
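For illustration, the table values can be expressed as a small lookup. Note that the ~6-million-parameter cutoff below is our own assumption for marking the LTL group, not a threshold defined in the manuscript:

```python
# Table 1 values (parameters in millions, model size in MB) as a lookup,
# to illustrate why the last three models are called lightweight.
models = {
    "AlexNet":      (61.10, 233.07),
    "DenseNet-121": (8.06,  30.44),
    "ResNet-50":    (25.64, 97.59),
    "VGG-16":       (138.36, 527.79),
    "InceptionV3":  (23.85, 90.86),
    "NASNetMobile": (5.32,  20.18),
    "MobileNet":    (4.25,  16.14),
    "EfficientNet": (5.33,  20.17),
}

# Illustrative cutoff (our assumption, not the paper's): models under
# ~6 M parameters fall into the lightweight (LTL) group.
lightweight = [name for name, (params, _) in models.items() if params < 6.0]
print(lightweight)  # ['NASNetMobile', 'MobileNet', 'EfficientNet']
```

Under this cutoff, exactly the three models used in the study fall into the lightweight group, roughly an order of magnitude smaller than VGG-16.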

 

Comment 4: Limited Data Scenarios: While the paper addresses the limited data issue in SHM of composite structures, it may be important to acknowledge the potential limitations of the proposed approach in scenarios where data scarcity is a significant challenge. The study could benefit from adding some discussion regarding the potential constraints and limitations of the proposed framework in real-world applications with extremely limited data availability.

Response: Thank you for your valuable feedback. We have discussed both the benefits and limitations of using the proposed framework in real-world applications with limited data.

Changes in the manuscript:

Section 4 (4th Paragraph): “In addition, to enhance the dependability of the suggested method for practical use, ten random responses were obtained from every sample of each health condition. This approach enables the model to learn the structural behavior from a range of responses, enhancing its adaptability and robustness.”

Conclusion: “Moreover, while the proposed framework effectively addresses the issue of limited data in SHM of composite structures, it is important to acknowledge the potential limitations in scenarios with extreme data scarcity. Future work could also explore advanced techniques such as semi-supervised learning and AI-based synthetic data generation to enhance model robustness. Additionally, the creation of public databases within the SHM of composite structures can further resolve data scarcity issues, ensuring broader applicability and improved performance in real-world applications.”


Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

When discussing "lightweight transfer learning," it is important to clearly define what "lightweight" means in this context.

Does "lightweight" refer to not having to retrain the model from scratch using new data?

Is it because using pre-trained models like NASNetMobile, MobileNet, and EfficientNet leads to better performance? Further investigation is needed to determine if EfficientNet consistently delivers the best results among them.

However, this phenomenon can vary depending on the dataset. It would be beneficial to test the models on another publicly available dataset to see if similar results are achieved.

Author Response

Response to the Reviewer 2 Comments

Thank you very much for taking the time to review the paper. We are pleased to receive your recognition of our article. We carefully considered your suggestions, and here we provide detailed answers to all your concerns. All revisions made in response to your comments have been highlighted in the machines-3145369 Revised Manuscript document.

 

Comment 1: When discussing "lightweight transfer learning," it is important to clearly define what "lightweight" means in this context.

Response: Thank you for your valuable recommendation. In the revised version, we added the definition of the lightweight models. The changes made to the manuscript are as follows:

Changes in the manuscript:

Section 2.2: “Thus, lightweight transfer learning refers to transfer learning models with fewer parameters and reduced computational requirements, making them suitable for deployment in resource-constrained environments.”

 

Comment 2: Does "lightweight" refer to not having to retrain the model from scratch using new data?

Response: Thank you for highlighting this issue. The term lightweight refers to the size and parameters of the pre-trained models that possess the least number of parameters and sizes.  In the revised version, we have added more descriptions explaining what lightweight means. Moreover, a table has been added to the revised manuscript which includes metrics such as the number of parameters and size of the transfer learning models. The lower values of these metrics represent the LTL models. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 2.2: “Thus, lightweight transfer learning refers to transfer learning models with fewer parameters and reduced computational requirements, making them suitable for deployment in resource-constrained environments. Table 1 shows the total number of parameters and the size of some of the commonly used transfer learning models in comparison with the lightweight models.”

Section 2.2 (Table 1):

Table 1. Number of parameters and model size for commonly used transfer learning models compared with the LTL models.

Transfer Learning Model    Number of Parameters (Million)    Size of Model (MB)
AlexNet                    61.10                             233.07
DenseNet-121               8.06                              30.44
ResNet-50                  25.64                             97.59
VGG-16                     138.36                            527.79
InceptionV3                23.85                             90.86
NASNetMobile               5.32                              20.18
MobileNet                  4.25                              16.14
EfficientNet               5.33                              20.17

 

Comment 3: Is it because using pre-trained models like NASNetMobile, MobileNet, and EfficientNet leads to better performance? Further investigation is needed to determine if EfficientNet consistently delivers the best results among them.

Response: Thank you for your valuable suggestions. Generally, the performance of any artificial intelligence model is evaluated on unseen test data. This test dataset is extracted from the entire dataset before training; thus, it is not exposed to the model during training or validation. A similar approach has been adopted in this study, where the training and validation datasets were exposed to the model during training, while the test data was used as unseen data to validate the developed model. Through this approach, the EfficientNet model is expected to have sufficient generalization capability, as it shows good performance on the test data in terms of numerous evaluation metrics such as accuracy, precision, recall, and F1-score. However, the details related to data splitting were not available in the initial submission. Therefore, they have been added to the revised manuscript for clarity. The following changes have been made to the revised manuscript:

Changes in the manuscript:

Section 3.2 (heading): 3.2. Data conversion and splitting

Section 3.2: “To improve the generalization ability of the LTL model, the data from six responses comprising 1800 images was used for training, data from two responses comprising 600 images was used for validation, while data from the remaining two responses comprising 600 images was used for testing. This approach ensures that the model is trained on a diverse dataset (from multiple random responses and multiple samples), validated to fine-tune hyperparameters, and tested on unseen data to evaluate performance accurately. Splitting the data in this manner aims to enhance the model's robustness and ability to generalize to new, unseen data, which is crucial for reliable performance in real-world applications.”
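The response-level split above can be sketched as follows (illustrative names only; the manuscript does not show the actual implementation):

```python
# Sketch of the response-level 6/2/2 split from Section 3.2.
# Names are illustrative; the paper does not show its actual code.

IMAGES_PER_RESPONSE = 100   # per health state, per response (see data conversion)
HEALTH_STATES = 3           # H, D1, D2

response_ids = list(range(10))   # ten random responses
train_ids = response_ids[:6]     # 6 responses -> training
val_ids   = response_ids[6:8]    # 2 responses -> validation
test_ids  = response_ids[8:]     # 2 responses -> testing

def n_images(ids):
    """Images contributed by a set of responses across all health states."""
    return len(ids) * IMAGES_PER_RESPONSE * HEALTH_STATES

print(n_images(train_ids), n_images(val_ids), n_images(test_ids))  # 1800 600 600
```

Splitting by whole responses rather than by individual images, as described in the quoted passage, keeps all windows of a given response on the same side of the split.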

 

Comment 4: However, this phenomenon can vary depending on the dataset. It would be beneficial to test the models on another publicly available dataset to see if similar results are achieved.

Response: Thank you for your valuable suggestions. However, there is no public dataset available in this domain that could be used to further validate the proposed approach. Commonly, the performance of models varies on a new dataset. To tackle this and reduce the variation in results on new data, the model is validated on an unseen test dataset. The test data used in this study is also new to the model, yet its performance did not decrease significantly on the test dataset: accuracy dropped by only 5.28% relative to the training accuracy and by only 0.33% relative to the validation accuracy for the EfficientNet model. This reduction is minimal compared to the decrease for the NASNetMobile and MobileNet models. Moreover, the other evaluation metrics for the EfficientNet model on the test data also demonstrated good performance. These results suggest that our approach generalizes well and maintains robust performance even when tested on new unseen data. However, we appreciate your suggestion and will continue to explore opportunities to test our models on additional datasets as they become available to further strengthen our findings. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Conclusion: “Moreover, the performance of the EfficientNet model did not decrease significantly on the test dataset: accuracy dropped by only 5.28% relative to the training accuracy and by only 0.33% relative to the validation accuracy. This reduction is minimal compared to the decrease for the NASNetMobile and MobileNet models.”


Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

The authors present an activity on SHM of CFRP samples by means of a lightweight transfer learning method. In general, the paper is well written, detailing the proposed methodology. However, some points should be clarified and explored.

Please, see the attached file.

Comments for author File: Comments.pdf

Author Response

Response to the Reviewer 3 Comments

Thank you very much for taking the time to review the paper. We are pleased to receive your recognition of our article. We carefully considered your suggestions, and here we provide detailed answers to all your concerns. All revisions made in response to your comments have been highlighted in the machines-3145369 - Revised Manuscript (Highlight Changes) document.

 

Comment 1: Section 2.3: It is important for every SHM method to establish its reliability in identifying the presence or absence of damage. With reference to the test results data, what information is represented by the false positives, false negatives, true positives, and true negatives that allows calculation of the reliability indices of the methodology? For example: when does a signal data set represent a true positive? Is it when the data permit identification of the damage?

Response: Thank you for highlighting this issue. It was noticed that the meaning of false positives, false negatives, true positives, and true negatives concerning the SHM application was not present. Therefore, an explanation has been added to the revised manuscript which explains the significance of these parameters in identifying the health state of the composites. The changes made to the manuscript are as follows:

Changes in the manuscript:

Section 2.4: “where true positive indicates the correctly identified health state, false positive is the incorrect identification of a health state, false negative is when the SHM method fails to detect a health state that is actually present, and true negative is when the model correctly identifies the other health states. These metrics are obtained from the confusion matrix shown in Figure 3, which provides an overview of both true and predicted outputs. Therefore, both the confusion matrix and its derived evaluation metrics were utilized to estimate the SHM performance of the LTL models.”

Figure 3. Confusion matrix showing the distribution of true positives, false negatives, false positives, and true negatives in classification.
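The reliability metrics described in this change can be sketched numerically from a confusion matrix. The matrix values below are invented for illustration and are not results from the paper:

```python
# Per-class metrics from a 3x3 confusion matrix (rows = true health state,
# columns = predicted). Values are made up for illustration only.
cm = [
    [95,  3,  2],   # true H
    [ 4, 90,  6],   # true D1
    [ 1,  7, 92],   # true D2
]

def per_class_metrics(cm, k):
    """Precision, recall, and F1 for class index k."""
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(len(cm))) - tp  # predicted k, actually other
    fn = sum(cm[k]) - tp                             # actually k, predicted other
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Overall accuracy: correctly identified health states over all predictions.
accuracy = sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))
print(round(accuracy, 3))  # 0.923
```

Here a true positive for class H is a healthy sample classified as healthy, mirroring the definitions quoted from the revised Section 2.4.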

 

Comment 2: Section 3: Please provide more details on the CFRP samples. What are the sample dimensions? What are the damage dimensions and related locations? Please insert figures (also schematic ones) of the samples.

Response: Thank you for highlighting this issue. More details about the CFRP samples have been added to the revised manuscript. These include the dimensions of the samples, the dimensions and location of the damage, and the figures showing samples and their schematics for the three health states. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 3.1 (1st paragraph): The laminated composite plates of dimensions 350 mm × 350 mm were obtained through this process, as shown in Figure 4(a).

Section 3.1 (1st paragraph): The delaminations were induced in the mid-plane of the composites using a Teflon film. Both delaminations were of identical size (50 mm); however, D1 was closer to the clamped side, while D2 was located closer to the free end of the cantilever beam configuration during experimentation. The delaminations were deliberately sized identically to demonstrate the detection of damage at different locations within the composite structure. This approach addresses the challenge of identifying adjacent damages that may possess similar dynamic characteristics. Five different samples of each health state were used in the experiments to account for the manufacturing and experimentation uncertainties, as shown in Figure 4(b). The dimensions of each sample were 300 mm × 50 mm, and the use of five such samples for each health state is expected to mitigate potential inconsistencies from the manufacturing process, reducing the likelihood of false data and ensuring the reliability of data acquisition.

 

Figure 4. (a) The composite plate obtained after the hot press compression molding process, and (b) Five samples obtained for each health state where the red highlighted area shows the location of the damage, and the black shaded region on the left shows the clamping of the samples in cantilever beam configuration during the vibration testing. The schematic on the bottom shows the location of the seeded delaminations from a cross-sectional view present at the mid-plane.

 

Comment 3: Section 3: Please, could you explain, how you defined the size of the introduced debonding areas? Do they represent critical sizes?

Response: Thank you for highlighting this issue. The sizes of the introduced debonding areas were not selected based on critical sizes. Instead, they were deliberately chosen to be the same to demonstrate the identification of damage at different locations within the laminated composites. The selection was guided by literature, which highlights the challenge of detecting damages located adjacent to each other. Our approach highlights the difficulty of identifying similar-sized delaminations at varying positions, showcasing the effectiveness of lightweight transfer learning models in accurately detecting such damages. The following changes have been made to the revised manuscript:

Changes in the manuscript:

Section 3.1 (1st paragraph): The delaminations were deliberately sized identically to demonstrate the detection of damage at different locations within the composite structure. This approach addresses the challenge of identifying adjacent damages that may possess similar dynamic characteristics.

 

Comment 4: Section 3: Before performing the tests, did you do an NDI inspection to verify the health status of the samples? This can be important to understand which is the state of the samples before performing the tests to avoid, for example, any false data due to manufacturing processes.

Response: Thank you for your observation. No Non-Destructive Inspection (NDI) was performed before testing. However, to address potential experimental or manufacturing uncertainties, we prepared five samples for each test condition. This approach ensured that any variability or unforeseen issues arising from the manufacturing process did not affect the reliability of the results. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 3.1 (1st paragraph): Five different samples of each health state were used in the experiments to account for the manufacturing and experimentation uncertainties, as shown in Figure 4(b). The dimensions of each sample were 300 mm × 50 mm, and the use of five such samples for each health state is expected to mitigate potential inconsistencies from the manufacturing process, reducing the likelihood of false data and ensuring the reliability of data acquisition.

 

Figure 4. (a) The composite plate obtained after the hot press compression molding process, and (b) Five samples obtained for each health state where the red highlighted area shows the location of the damage, and the black shaded region on the left shows the clamping of the samples in cantilever beam configuration during the vibration testing. The schematic on the bottom shows the location of the seeded delaminations from a cross-sectional view present at the mid-plane.


Comment 5: Section 3.2: figure 4 shows the transformation of raw vibrational data into scalogram images. Is it not clear how the windows on the upper side of the figure can represent the health status of the samples?

Response: Thank you for highlighting this mistake. Figure 4 is updated to Figure 6 in the revised manuscript. The original figure was made to showcase how windows were defined while converting the time domain signals to scalograms using the CWT analysis. The terms D1 and D2 were an oversight in the figure representation, and we have corrected this in Figure 6 of the revised manuscript. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 3.2: Figure 6(a) shows the windows for the healthy signals and their scalogram images.

Section 3.2: A similar process was repeated for the two damage cases D1 and D2. The example scalogram images for D1 and D2 are shown in Figure 6(b).

 

Section 3.2 (Figure 6):

Figure 6. (a) The transformation of raw vibrational data into scalogram images using CWT analysis for the healthy laminated composites, and (b) an example of scalogram images for damage states D1 and D2.

 

Comment 6: Section 3.2: regarding figure 4, please detail how and why the figures in lower side represent the health status of the samples. For example, what do the color scales represent in the figures? Are they indications of the presence of damages? Could the colored parts be representative of structural defects, for example due to manufacturing defects, and not damage?

Response: Thank you for your insightful comment. Figure 4 has been updated to Figure 6 in the revised manuscript. The scalogram images in Figure 6 represent the time-frequency distribution of the signals collected from the samples in different health states (H, D1, and D2). The color scales in these images correspond to the energy intensity of the signal at specific time-frequency regions, with warmer colors (red and yellow) indicating higher energy levels and cooler colors (blue) indicating lower energy levels. These energy distributions are used to identify changes in the structural health of the composite. The presence of damage is typically associated with distinct energy patterns in the scalograms, such as localized high-energy regions that differ from those of the healthy state. While the colored regions might theoretically represent manufacturing defects, the controlled experimental setup, including the use of multiple samples and the consistent appearance of these patterns across tests, suggests that these variations are mainly due to the introduced delaminations rather than manufacturing defects or experimental uncertainties. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 3.2: These scalogram images represent the time-frequency distribution of signals, with color scales indicating energy intensity—warmer colors (red and yellow) signify higher energy levels and cooler colors (blue) represent lower energy levels. These patterns help identify structural health, where distinct high-energy regions correlate with the presence of damage. Given the controlled setup and consistent patterns, the variations are attributed more to introduced delaminations rather than potential manufacturing defects or experimental uncertainties.

Figure 6. (a) The transformation of raw vibrational data into scalogram images using CWT analysis for the healthy laminated composites, and (b) an example of scalogram images for damage states D1 and D2.

 

Comment 7: Section 4: Although accuracy and precision have been defined in section 2.2.1, it would be important to know whether this method also allows the location and size of damage to be identified. Please, report what parameter has been taken into account to determine the accuracy of the methods.

Response: Thank you for your insightful comment. This study is focused on damage detection in laminated composites using lightweight transfer learning models, rather than damage localization and size determination. Damage localization and size determination are regression problems where the last layers of AI models predict a specific value. However, this study utilizes classification models to predict the health states of the composites. Moreover, damage localization and size determination require data at multiple locations with different damage sizes to make the final predictions, which is not possible using the current data as it is obtained for the same size of damage at two locations only. Therefore, it has been mentioned in the conclusion section as future work.

The details of the parameters to determine the accuracy have been added to the revised manuscript. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 2.4: “where true positive indicates the correctly identified health state, false positive is the incorrect identification of a health state, false negative is when the SHM method fails to detect a health state that is actually present, and true negative is when the model correctly identifies the other health states. These metrics are obtained from the confusion matrix shown in Figure 3, which provides an overview of both true and predicted outputs. Therefore, both the confusion matrix and its derived evaluation metrics were utilized to estimate the SHM performance of the LTL models.”

 

Figure 3. Confusion matrix showing the distribution of true positives, false negatives, false positives, and true negatives in classification.

Conclusion: The current study focused on delaminations of the same size at different locations; future research could expand to include delaminations of varying sizes and locations within laminated composite structures.

 

Comment 8: Section 4- Line 325: It is reported: “Due to the nature of the same size of damages present in the composite laminates, there still exists confusion in identifying D1 and D2, but the misclassified instances for the damage states are less, compared to the NASNetMobile and MobileNet.” Please, clarify the statement. It seems that the LTL methods are not able to identify correctly the damages due to fact that they have the same size. Is it correct? But an SHM method should be independent of the dimensions of damages taking into account that the damages are on different samples. Even if samples had more than one damage, the method should be able to detect it. This question is linked to question 1 about the accuracy of the method and how the information is considered false or true.

Response: Thank you for highlighting this issue. You are correct that an SHM method should ideally be independent of damage size and capable of accurately detecting damages, even when multiple damages are present. In our study, the LTL models were indeed able to classify all health states well. However, some confusion occurred between the two damage states (D1 and D2) because they are of the same size and located at adjacent positions. Even though they are present on different samples, damages located adjacent to each other share some similar dynamic characteristics, which causes the LTL models to confuse them. This is also evident from the scalogram images, where some high-intensity regions are common to the scalograms of D1 and D2. These details were missing from the original submission, which caused confusion; they have therefore been added to the revised manuscript for better understanding.

Regarding the accuracy of the proposed model, a 10-fold cross-validation strategy has been added to the revised manuscript. This strategy provides a robust assessment of a model's performance by reducing variability, as the model is trained and validated on different subsets of the data. It also ensures that the evaluation is less dependent on any particular data split, leading to more reliable and generalized results.

The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 4 (3rd paragraph): This confusion is due to their identical sizes and adjacent locations, which likely result in similar dynamic characteristics. This overlap in the dynamic characteristics is reflected in the scalogram images, where some common high-intensity regions are present for both D1 and D2, contributing to the misclassification between these states.
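A time-frequency scalogram of the kind discussed above can be sketched with a minimal continuous wavelet transform. This is an illustrative numpy implementation, not the paper's pipeline; the Morlet wavelet, sampling rate, tone frequencies, and scale range are all hypothetical choices.

```python
import numpy as np

def morlet_scalogram(signal, scales, w0=6.0):
    """Minimal Morlet CWT returning |coefficients|, i.e. the
    time-frequency scalogram magnitude (rows = scales, cols = time)."""
    n = len(signal)
    t = np.arange(n) - n // 2
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Complex Morlet wavelet at scale s, with 1/sqrt(s) normalization.
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet = wavelet / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet)[::-1], mode="same"))
    return out

# Hypothetical vibration response: two tones, loosely mimicking how two
# health states could share high-intensity regions in their scalograms.
fs = 1000.0
time = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 50 * time) + 0.5 * np.sin(2 * np.pi * 120 * time)
scal = morlet_scalogram(sig, np.arange(1, 64))
```

Rendering `scal` as an image (scales on one axis, time on the other) yields the kind of scalogram fed to the LTL models; shared high-intensity bands between two such images are what drives the D1/D2 confusion described above.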

Section 4 (2nd paragraph): A 10-fold cross-validation strategy was employed to validate the generalization ability of the EfficientNet-based model. This approach rigorously tests the performance of the model across different data subsets, minimizing the risk of overfitting and ensuring that the results are not dependent on any specific data split. The results from the 10-fold cross-validation are shown in Figure 8. It can be observed that the training and validation accuracies are consistent across all folds. The mean accuracies over the 10 folds are shown in the last columns, indicating training and validation accuracies of (99.30 ± 0.82)% and (94.67 ± 1.72)%, respectively. The low standard deviation in both training and validation accuracies indicates that the performance of the model is stable and reliable across different data splits. This further confirms the robustness of the EfficientNet-based model for SHM applications.

Figure 8. The training and validation accuracies for each fold of the EfficientNet model based on 10-fold cross-validation.
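The 10-fold protocol quoted above can be sketched as follows. This is a toy illustration: a nearest-centroid classifier on synthetic features stands in for the EfficientNet-based model on scalogram images, and all names, sizes, and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dataset: 180 feature vectors, three health states
# (0 = healthy, 1 = D1, 2 = D2), 60 samples per state.
X = rng.normal(size=(180, 8)) + np.repeat(np.arange(3), 60)[:, None]
y = np.repeat(np.arange(3), 60)

def fold_accuracy(X_tr, y_tr, X_va, y_va):
    """Toy nearest-centroid classifier standing in for the LTL model."""
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in np.unique(y_tr)])
    pred = np.argmin(((X_va[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
    return (pred == y_va).mean()

# 10-fold cross-validation: shuffle once, split into 10 disjoint folds,
# validate on each fold while training on the remaining nine.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 10)
val_acc = []
for k in range(10):
    va = folds[k]
    tr = np.concatenate([folds[j] for j in range(10) if j != k])
    val_acc.append(fold_accuracy(X[tr], y[tr], X[va], y[va]))

mean_acc, std_acc = np.mean(val_acc), np.std(val_acc)
```

Reporting `mean_acc ± std_acc` over the folds is exactly the "(mean ± standard deviation)%" form quoted in the revised Section 4; a low `std_acc` is what supports the claim of split-independent performance.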

 

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

If EfficientNet has strong generalization capabilities, it should consistently show low error across various datasets and partitioning methods. Therefore, if you perform experiments multiple times by randomly selecting 1,800 images and using a 10-fold cross-validation method, EfficientNet is likely to still show the best results. If you can demonstrate this with an example, it might be sufficient without using another dataset. Could you conduct about 10 trials to see if it consistently shows low RMS values?

Author Response

Response to the Reviewer 2 Comments

Thank you very much for taking the time to review the paper. We are pleased to receive your recognition of our article. We carefully considered your suggestions, and here we provide detailed answers to all your concerns. All revisions made in response to your comments have been highlighted in the machines-3145369 - Revised Manuscript (Highlight Changes) document.

 

Comment 1: If EfficientNet has strong generalization capabilities, it should consistently show low error across various datasets and partitioning methods. Therefore, if you perform experiments multiple times by randomly selecting 1,800 images and using a 10-fold cross-validation method, EfficientNet is likely to still show the best results. If you can demonstrate this with an example, it might be sufficient without using another dataset. Could you conduct about 10 trials to see if it consistently shows low RMS values?

Response: Thank you for your valuable recommendation. In the revised version, we added 10-fold cross-validation to showcase the generalization capability of the proposed EfficientNet model. The changes made to the manuscript are as follows:

Changes in the manuscript:

Section 4 (2nd paragraph): A 10-fold cross-validation strategy was employed to validate the generalization ability of the EfficientNet-based model. This approach rigorously tests the performance of the model across different data subsets, minimizing the risk of overfitting and ensuring that the results are not dependent on any specific data split. The results from the 10-fold cross-validation are shown in Figure 8. It can be observed that the training and validation accuracies are consistent across all folds. The mean accuracies over the 10 folds are shown in the last columns, indicating training and validation accuracies of (99.30 ± 0.82)% and (94.67 ± 1.72)%, respectively. The low standard deviation in both training and validation accuracies indicates that the performance of the model is stable and reliable across different data splits. This further confirms the robustness of the EfficientNet-based model for SHM applications.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

Please, update the manuscript with minor revisions in attached files.

Comments for author File: Comments.pdf

Author Response

Response to the Reviewer 3 Comments

Thank you very much for taking the time to review the paper. We are pleased to receive your recognition of our article. We carefully considered your suggestions, and here we provide detailed answers to all your concerns. All revisions made in response to your comments have been highlighted in the machines-3145369 - Revised Manuscript (Highlight Changes) document.

 

Comment 1: In the first part of the statement, a laminated plate dimension of 350×350 mm is reported, while in the second part a sample dimension of 300 mm×50 mm is reported. If understood correctly, the five samples were obtained from the laminated plate. Is that correct? In that case, add those details or correct the sample dimensions. Moreover, is the reported delamination size (50 mm) the longitudinal or the width dimension? Generally, a delamination has two dimensions. Is the delamination width equal to the sample width? Please add details.

Response: Thank you for your suggestion. More details about the size of the samples and size of the delamination have been added to the revised manuscript. The changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 3.1 (1st paragraph): The edges of the developed composites were rough due to manual layup; therefore, the rough edges were removed. After removing the rough edges, the dimensions of each sample were 300 mm × 50 mm, and five such samples for each health state were obtained to mitigate potential inconsistencies from the manufacturing process, reducing the likelihood of false data and ensuring the reliability of data acquisition.

Section 3.1 (1st paragraph): Both delaminations were of identical size (50 mm × 50 mm); however, D1 was closer to the clamped side, while D2 was located closer to the free end of the cantilever beam configuration during experimentation.

 

Comment 2: The capability of your method to detect damages should be independent from the size of the delamination, since no reference to size of the damage detected is reported in your work and no samples with adjacent damages were tested. Probably the identical size is due to facilitate the manufacturing process of the samples. Please, clarify the statement.

Response: Thank you for highlighting this issue. Yes, delaminations of identical size are also easier to produce in laminated composites, which facilitates the manufacturing process. Therefore, the following changes have been made to the revised manuscript:

Changes in the manuscript:

Section 3.1 (1st paragraph): Moreover, delaminations of identical size are also easier to produce in laminated composites, facilitating the manufacturing process.

 

Comment 3: For future works, the authors are invited to use also standard methods (as NDI) to avoid any false information and to substantiate the validation of the method used. In alternative, could be reported if the vibration signals for the same type of samples can be compared to demonstrate the manufacturing process don’t influence the tests.

Response: Thank you for your observation. Since the vibration signals were obtained through random excitations, comparing random signals belonging to the same health state would not be meaningful. However, it is preferable to use standard NDI techniques to validate the health states of the composites. Therefore, an addition has been made to the possible future work, and the changes made to the revised manuscript are as follows:

Changes in the manuscript:

Section 5: Moreover, standard non-destructive inspection (NDI) techniques could also be utilized to validate the health states of the composites. These pre-tested health states will ensure high-quality data from each health state and avoid false information during model development and validation.

 

Author Response File: Author Response.pdf
