A Review of Space Target Recognition Based on Ensemble Learning
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper provides an overview of the most significant works on ensemble learning, with a particular focus on Space Situational Awareness applications.
The paper is clear and well written, and it provides a thorough review of ensemble learning for space target recognition.
English is satisfactory.
There is only one typo/text mistake: a repetition of the sentences at lines 51 and 64. The two sentences should be harmonized.
Author Response
Comments 1: A repetition of the sentences at lines 51 and 64. The two sentences should be harmonized.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have harmonized the two sentences. The details can be found on page 2, line 51 of the revised manuscript.
The revised objective now reads: "Former president of the Association for the Advancement of Artificial Intelligence (AAAI), Thomas G. Dietterich, categorized ensemble learning alongside scalable machine learning, reinforcement learning, and probabilistic networks as the four major research directions in machine learning [6]."
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This paper explains the core principles of ensemble learning, examines its characteristics and fusion methods, and offers a detailed comparison of three commonly used ensemble learning techniques. It also explores the fundamental attributes of space targets and establishes a hierarchical framework for space target recognition. Additionally, the paper reviews recent advancements in applying ensemble learning to space target recognition, with a focus on three key areas: space target recognition datasets, the integration of traditional machine learning models, and ensemble deep learning. Furthermore, classical machine learning and ensemble learning algorithms are evaluated on a custom-built space target simulation dataset, revealing that Stacking performs well.
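For context on the Stacking technique the summary above highlights, a minimal sketch follows. This is an illustrative example using scikit-learn on synthetic data; it is not the authors' actual pipeline, and the base/meta learners chosen here are assumptions, not those used in the manuscript:

```python
# Sketch of a Stacking ensemble: base learners' out-of-fold predictions
# are combined by a meta-learner (here a logistic regression).
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a space target feature dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner
    cv=5,  # cross-validation used to generate meta-features
)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.2f}")
```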
1- There are some repetitions in the text. For example;
"Former president of the Association for the Advancement of Artificial Intelligence (AAAI), Thomas G. Dietterich, categorized ensemble learning alongside scalable machine learning, reinforcement learning, and probabilistic networks as the four major research directions in machine learning"
and
"According to Thomas G. Dietterich, former president of the American Association for Artificial Intelligence, ensemble learning, scalable machine learning, reinforcement learning, and probabilistic networks are recognized as the four primary research directions in machine learning"
2- Regarding the reference list, there are many other reputable and important papers to be cited, especially on the topic of space debris removal. I would suggest that the authors review the state of the art again and add the related papers.
3- If there is no conflict of interest, and the funding sources allow, I would suggest that the authors make at least some part of the code and/or generated dataset open-source, so that researchers working in the same field can better utilize the outputs of this paper and the authors can contribute to the dissemination of science. If this is not possible, you can ignore this comment.
4- What could be the restrictions of the proposed method in the space domain? For example, could this method easily be integrated into any on-board computer of a satellite? What could be the potential problems: energy consumption? data stream? communication? etc.
5- What would the proposed method's performance be if the image contained several collided satellite parts/satellites? Imagine a scenario in which there are multiple floating objects that intersect with each other. How would the proposed method perform? Could the algorithm perceive several floating objects as if they were a single entity, or vice versa?
Author Response
Comments 1: There are some repetitions in the text.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have harmonized the two sentences. The details can be found on page 2, line 51 of the revised manuscript.
Comments 2: Regarding the reference list, there are many other reputable and important papers to be cited, especially on the topic of space debris removal. I would suggest that the authors review the state of the art again and add the related papers.
Response 2: Thank you for pointing this out. We agree with this comment. Therefore, we have added five papers published in the last three years on the topic of space debris removal. Details can be found on page 17, line 537, and in Table 5 of the revised manuscript.
Comments 3: If there is no conflict of interest, and the funding sources allow, I would suggest that the authors make at least some part of the code and/or generated dataset open-source, so that researchers working in the same field can better utilize the outputs of this paper and the authors can contribute to the dissemination of science. If this is not possible, you can ignore this comment.
Response 3: Thank you for pointing this out. Regrettably, because of the specific characteristics of the data, it is not convenient for us to open-source the generated data. We kindly ask for your understanding on this matter.
Comments 4: What could be the restrictions of the proposed method in the space domain? For example, could this method easily be integrated into any on-board computer of a satellite? What could be the potential problems: energy consumption? data stream? communication? etc.
Response 4: Thank you for pointing this out. We agree with this comment. Therefore, we have explained the computational resource and efficiency issues faced by ensemble learning methods in Section 6.1 and provide possible solutions. The details can be found on page 26, line 787 of the revised manuscript.
Comments 5: What would the proposed method's performance be if the image contained several collided satellite parts/satellites? Imagine a scenario in which there are multiple floating objects that intersect with each other. How would the proposed method perform? Could the algorithm perceive several floating objects as if they were a single entity, or vice versa?
Response 5: Thank you for pointing this out. We agree with this comment. Therefore, we have discussed the multi-target recognition problem faced in space target recognition and offer possible speculations and solutions in Section 6.1. However, due to the limitations of the acquired dataset, it is not possible to conduct multi-scene target detection experiments, so we only give an outlook. The details can be found on page 25, line 772 of the revised manuscript.
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This paper offers a solid synthesis of ensemble learning methods tailored to the challenging domain of space target recognition. The techniques are clearly described, and the application is niche/novel.
The work is organized logically, moving from theory to application with an effective use of figures and tables to illustrate model performance and parameter sensitivity. The experimental design appears robust, and the comprehensive evaluation, including ROC curves and confusion matrices, substantiates the claims regarding model efficacy. However, the heavy reliance on simulated data could restrict the generalizability of the findings; a discussion on real-world data validation would further strengthen the study. More on this in the revisions requested below.
Scientifically, the paper is sound, with a thorough methodological approach that is both transparent and replicable. The literature review is extensive and appropriately cites important works in ensemble learning, as well as recent advances. Referencing is comprehensive.
The manuscript is generally clear and technically precise. While it is densely packed with information, the English is readable and professional, albeit with occasional minor issues that could be polished for smoother flow.
The paper makes a meaningful contribution by effectively summarizing and applying ensemble learning methods to space target recognition - a problem with unique challenges. The work could benefit from a deeper exploration of potential limitations related to the use of synthetic datasets and suggestions for future work in real-world scenarios.
What follows is a list of issues with the manuscript that should be considered in a revised submission:
1) Over-reliance on synthetic data (Sections 4.1 and 5.1) - The experiments use only synthetic satellite simulation images, which might not capture the variability of real-world space data. Try to incorporate experiments on real-world datasets or include a detailed discussion on the limitations of synthetic data and its impact on generalizability.
2) Insufficient discussion of limitations (Section 5.3 and Conclusion) - There’s little discussion on potential drawbacks, such as model overfitting to simulated data or challenges in deploying these methods in practical scenarios. Add a dedicated subsection that critically examines the limitations of the current approach and outlines future work to address these concerns.
3) Dense presentation in the experimental section (Section 5.3) - The experimental results are detailed but somewhat overwhelming, with heavy reliance on multiple figures and tables that may obscure the main findings. Streamline the presentation by summarizing key results in a concise manner (and perhaps move some of the extensive details to an appendix or supplementary material?)
4) Redundant and inconsistent descriptions of ensemble methods (Section 2.3) - The explanations of bagging, boosting, and stacking are repetitive, and the discussion lacks a unified framework for comparing these methods. Revise Section 2.3 to provide a more integrated overview that highlights the unique strengths and trade-offs of each method without unnecessary repetition.
5) Lack of Clarity in fusion explanation (Section 2.2) - The description of fusion methods is complex and could be hard for readers to follow, particularly due to the dense technical language. Simplify the language and support the discussion with clear diagrams and concrete examples relevant to space target recognition.
Author Response
Comments 1: Over-reliance on synthetic data (Sections 4.1 and 5.1) - The experiments use only synthetic satellite simulation images, which might not capture the variability of real-world space data. Try to incorporate experiments on real-world datasets or include a detailed discussion on the limitations of synthetic data and its impact on generalizability.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have added a discussion of the limitations of using only synthetic satellite simulation data in Section 5.1. The details can be found on page 19, line 608 of the revised manuscript.
Comments 2: Insufficient discussion of limitations (Section 5.3 and Conclusion) - There’s little discussion on potential drawbacks, such as model overfitting to simulated data or challenges in deploying these methods in practical scenarios. Add a dedicated subsection that critically examines the limitations of the current approach and outlines future work to address these concerns.
Response 2: Thank you for pointing this out. We agree with this comment. Therefore, we have added Section 6.1 to summarize the challenges and solutions faced by ensemble learning in the field of space object recognition. The details can be found on page 25, line 772 of the revised manuscript.
Comments 3: Dense presentation in the experimental section (Section 5.3) - The experimental results are detailed but somewhat overwhelming, with heavy reliance on multiple figures and tables that may obscure the main findings. Streamline the presentation by summarizing key results in a concise manner (and perhaps move some of the extensive details to an appendix or supplementary material?)
Response 3: Thank you for pointing this out. We agree with this comment. Therefore, we have streamlined the presentation of Section 5.3 by briefly summarizing the main results in the first paragraph and highlighting the conclusions. The details can be found on page 22, line 702 of the revised manuscript.
Comments 4: Redundant and inconsistent descriptions of Ensemble methods (Section 2.3) - The explanations of bagging, boosting, and stacking are repetitive, and the discussion lacks a unified framework for comparing these methods. Revise Section 2.3 to provide a more integrated overview that highlights the unique strengths and trade-offs of each method without unnecessary repetition.
Response 4: Thank you for pointing this out. We agree with this comment. Therefore, we have revised Section 2.3 by removing redundant parts, streamlining the presentation, adjusting Table 2 to highlight the unique advantages and limitations of each method, and citing 6 papers that use ensemble learning and attaching their GitHub links. The details can be found on page 6, line 212 and Table 2 of the revised manuscript.
Comments 5: Lack of Clarity in fusion explanation (Section 2.2) - The description of fusion methods is complex and could be hard for readers to follow, particularly due to the dense technical language. Simplify the language and support the discussion with clear diagrams and concrete examples relevant to space target recognition.
Response 5: Thank you for pointing this out. We agree with this comment. Therefore, we have drawn a schematic diagram of the majority voting, plurality voting, and meta-learning methods based on the space target recognition problem, and described the fusion methods. The details can be found on page 6, line 203 of the revised manuscript.
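For readers unfamiliar with the voting-based fusion rules discussed above, a minimal sketch follows. The class labels ("satellite", "debris") and the three-learner setup are hypothetical, purely to illustrate the definitions, and do not come from the manuscript:

```python
from collections import Counter

def plurality_vote(predictions):
    """Return the label predicted by the most base learners.
    With only two classes this reduces to majority voting."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical base learners classifying one space target image
votes = ["satellite", "debris", "satellite"]
print(plurality_vote(votes))  # -> satellite
```

Meta-learning fusion differs in that, instead of a fixed rule like this, a trained model maps the base learners' outputs to the final label.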
Author Response File: Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
There are some comments that the authors should consider in order to further improve the quality of the paper.
- The captions of the figures and tables contain only brief explanations. It would be good if the authors provided more details in the captions.
- In "Table 2. Comparison of common algorithms for ensemble learning", it would be good if the authors also provided some papers that used these algorithms with their references, the GitHub code if available, and what datasets are utilized.
- Providing the overall block diagram of state-of-the-art space target recognition methods will further help the reader to better understand the technique.
- For the "Table 3. Space target recognition dataset", please provide the public availability link of the dataset. It would be good if the authors also provided some image examples of each dataset and compared them. Discuss the challenges of each dataset.
- Since this is an evaluation paper, the authors need to provide details about the evaluation metrics: those used in the paper's experiments and those used by other approaches.
- It would be good if the authors bolded the best performances in the tables.
- Qualitative comparison results of the methods in Table 7 are recommended for the paper. For example, comparing the challenges, showing the failure cases and limitations.
- The authors mainly consider classical machine learning in the evaluation; it would be good if they also compared recent deep learning methods whose models are available.
- Before the conclusion, it would be good if the authors also listed the unsolved challenges and possible solutions to tackle them.
Author Response
Comments 1: There are a few explanations at the captions of the Figures and Tables. It would be good if the authors provided more details in the captions.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have reviewed all figures and tables and revised and improved their captions. The details can be found on line 353, line 433, line 518, line 591, line 622, line 626, and line 714 of the revised manuscript.
Comments 2: In "Table 2. Comparison of common algorithms for ensemble learning", it would be good if the authors also provided some papers that used these algorithms with their references, the GitHub code if available, and what datasets are utilized.
Response 2: Thank you for pointing this out. We agree with this comment. Therefore, we have revised Section 2.3 by removing redundant parts, streamlining the presentation, adjusting Table 2 to highlight the unique advantages and limitations of each method, and citing 6 papers that use ensemble learning and attaching their GitHub links. The details can be found on page 6, line 212 and Table 2 of the revised manuscript.
Comments 3: Providing the overall block diagram of state-of-the-art space target recognition methods will further help the reader to better understand the technique.
Response 3: Thank you for pointing this out. We agree with this comment. Therefore, we have drawn Figure 5 “The overall block diagram of space target recognition methods”, dividing the overall process of space target recognition into six modules: input, data processing, target recognition, post-processing, output, and performance evaluation. The details can be found on page 9, line 269 of the revised manuscript.
Comments 4: For the "Table 3. Space target recognition dataset", please provide the public availability link of the dataset. It would be good if the authors also provided some image examples of each dataset and compared them. Discuss the challenges of each dataset.
Response 4: Thank you for pointing this out. We agree with this comment. Therefore, we have added public links to the datasets and the deficiencies of each dataset in Table 3 “Space target recognition, classification, and pose estimation dataset” and provided image examples of the seven datasets mentioned in the paper in Figure 7. The details can be found on page 12, line 433 and page 13, line 434 of the revised manuscript.
Comments 5: Since it is an evaluation paper, authors need to provide details about the evaluation metrics, the metrics that are used in the paper for the experiments and other approaches.
Response 5: Thank you for pointing this out. We agree with this comment. Therefore, we have added Section 5.2.3 "Performance Evaluation Metrics" to introduce the evaluation indicators used in the paper: precision, recall, F1-score, and accuracy. The details can be found on page 21, line 681 of the revised manuscript.
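The four metrics named in the new Section 5.2.3 can be sketched from their standard definitions as follows; the labels below are made up for illustration (1 = target present, 0 = absent) and are not the manuscript's results:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall, F1-score, and accuracy for a binary task."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    return precision, recall, f1, accuracy

# Made-up ground truth and predictions
p, r, f1, acc = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(p, r, f1, acc)
```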
Comments 6: It would be good that the authors bolded the best performances in the tables.
Response 6: Thank you for pointing this out. We agree with this comment. Therefore, we have put the best performing indicators in bold in Tables 7 “Performance of different machine learning models on simulated data” and Table 8 “Performance of homogeneous and heterogeneous ensemble models”. The details can be found on page 22, line 714 and page 24, line 758 of the revised manuscript.
Comments 7: Qualitative comparison results of the methods in Table 7 are recommended for the paper. For example, comparing the challenges, showing the failure cases and limitations.
Response 7: Thank you for pointing this out. We agree with this comment. Therefore, we have added a qualitative comparison of the experimental results in Table 7 “Performance of different models on simulated data” to highlight the conclusions and mainly analyze the limitations of decision trees and KNN in the field of space target recognition problems. The details can be found on page 22, line 702 of the revised manuscript.
Comments 8: The authors mainly consider the classical machine learning during evaluation, it would be good if they also compare the recent deep learning methods that their model is available.
Response 8: Thank you for pointing this out. We agree with this comment. Therefore, we have added a multi-layer perceptron to the experimental part, compared it with machine learning methods and ensemble learning methods, added a description of the multi-layer perceptron parameter settings, redrawn the ROC curve (Figure 11) and confusion matrix (Figure 12), and analyzed the experimental results. The details can be found on page 21, line 666 and page 22, line 702 of the revised manuscript.
Comments 9: Before the conclusion, it would be good if the authors also listed the unsolved challenges and possible solutions to tackle them.
Response 9: Thank you for pointing this out. We agree with this comment. Therefore, we have added Section 6.1 "Challenges and Outlook" to summarize and analyze the challenges and possible solutions for space target recognition from five perspectives: imbalanced datasets, selection of basic learners, computational resources and efficiency, establishment of standardized space target recognition datasets, and single scene detection. The details can be found on page 21, line 666 and page 25, line 772 of the revised manuscript.
Author Response File: Author Response.pdf
Round 2
Reviewer 4 Report
Comments and Suggestions for Authors
The authors clearly addressed my comments in the revised manuscript. There are some minor comments, as follows, that the authors can consider in order to further improve the quality of the paper:
- Regarding comment 13, on qualitative results: my point was to show some sample image comparisons of the methods.
- Similar to the visual example images of Figure 7 in the manuscript, it would be good if the authors also prepared another figure illustrating and specifying the challenges in these datasets.
- Figure 10 is small and hard to see; please consider enlarging this figure.
- Instead of using the equations inline in the text of Section "5.2.3 Performance Evaluation Metrics", it is recommended to define them on separate lines with numbering.
- In Table 7, for "Execution time", discuss the rationale behind the resulting values.
Author Response
Comments 1: Regarding comment 13, on qualitative results: my point was to show some sample image comparisons of the methods.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have visualized the detection results, added schematic diagrams of the detection results of various methods, and conducted a brief qualitative analysis. The details can be found on page 23, line 736 of the revised manuscript.
Comments 2: Similar to the visual example images of Figure 7 in the manuscript, it would be good if the authors also prepared another figure illustrating and specifying the challenges in these datasets.
Response 2: Thank you for pointing this out. We agree with this comment. Therefore, we have summarized the challenges faced by space target datasets and added typical example images to intuitively show the impact of space target datasets on target detection. The details can be found on page 34, line 451 of the revised manuscript.
Comments 3: Figure 10 is small and hard to see; please consider enlarging this figure.
Response 3: Thank you for pointing this out. We agree with this comment. Therefore, we have adjusted the size of Figure 10 appropriately (now Figure 11). The details can be found on page 21, line 659 of the revised manuscript.
Comments 4: Instead of using the equations inline in the text of Section "5.2.3 Performance Evaluation Metrics", it is recommended to define them on separate lines with numbering.
Response 4: Thank you for pointing this out. We agree with this comment. Therefore, we have moved the inline equations in "5.2.3 Performance Evaluation Metrics" to separate lines with numbering. The details can be found on page 22, line 699 of the revised manuscript.
Comments 5: In Table 7, for "Execution time", discuss the rationale behind the resulting values.
Response 5: Thank you for pointing this out. We agree with this comment. Therefore, we have added an analysis of the rationale behind the "execution time" values, building on the qualitative analysis of the results in "5.3.1 Comparative Analysis of Model Classification Performance". The details can be found on page 23, line 718 of the revised manuscript.
Author Response File: Author Response.pdf