A Review of Space Target Recognition Based on Ensemble Learning
Abstract
1. Introduction
2. Basic Principles of Ensemble Learning
2.1. Diversity in Ensemble Learning
- Diversity measurement methods: Diversity measurement methods primarily rely on statistical analysis of the predictions of the basic learners and can be categorized into pairwise and non-pairwise diversity measures [26]. Pairwise measures evaluate the diversity between pairs of basic learners and characterize the overall diversity of the ensemble system by their average; non-pairwise measures compute diversity metrics for the ensemble system directly. Theoretically, there is no clear correlation between the diversity of the basic learners and the accuracy of the ensemble system [27], and further theoretical research is needed to better define diversity and to explore its relationship with ensemble accuracy. A summary of diversity measurement metrics in ensemble learning is presented in Table 1. The following notation is used: M denotes the number of basic learners; h_i and h_j (i, j = 1, 2, …, M, i ≠ j) represent two basic learners. N^11 is the number of samples correctly classified by both h_i and h_j; N^10 is the number of samples correctly classified by h_i but incorrectly classified by h_j; N^01 is the number of samples incorrectly classified by h_i but correctly classified by h_j; and N^00 is the number of samples incorrectly classified by both h_i and h_j. The total number of samples is N = N^11 + N^10 + N^01 + N^00. x_j (j = 1, 2, …, N) denotes the j-th of the N samples, and ρ(x_j) is the number of basic learners that correctly classify x_j.
- Methods for enhancing diversity: Enhancing diversity in ensemble systems typically involves three primary dimensions: the data, the algorithmic parameters, and the model architecture.

At the data level, the primary methods for enhancing diversity are perturbations of the input samples and of the features. Common techniques for perturbing input samples include resampling, sequential sampling, and hybrid sampling. Resampling methods generate different training subsets, while sequential sampling methods draw samples based on the results of the previous learning iteration. Hybrid sampling methods [33] improve model diversity and performance by integrating global and local data characteristics. Perturbing the input features involves randomly selecting subsets of the initial attribute set to train diverse basic learners: Rokach et al. [34] proposed a general framework for feature set splitting to optimize decision tree models, and for time-series data, Yang et al. [35] introduced a splitting ensemble method based on Hidden Markov Models to address model initialization and selection. Perturbing the output representation is another important technique; methods such as randomizing outputs [36] or using Error-Correcting Output Codes (ECOCs) [37] can enhance predictive accuracy.

At the algorithmic parameter level, diversity among basic learners is enhanced by modifying their parameter sets to generate distinct models. In the C4.5 decision tree algorithm, adjusting the confidence factor can produce different decision boundaries [38]. Similarly, multiple kernel learning generates diversity by tuning the kernel parameters and their combination weights; Gönen et al. [39] explored how different parameter configurations in multiple kernel learning algorithms can produce basic learners with diverse characteristics. In neural networks, altering the number of nodes and the network topology can likewise generate different basic learners.

At the model structure level, diversity can be enhanced by adjusting the internal or external structures of the basic learners. For example, heterogeneous ensembles introduce different types of models to improve both the diversity and the performance of the ensemble system. The Mixture of Experts (MoE) model employs a ‘gating network’ to determine the weights of multiple expert models and combines their outputs through a weighted sum based on these weights [14]. Additionally, Wang et al. [40] found that incorporating a moderate number of decision trees into neural network ensemble systems can further enhance their overall performance.
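Under the notation above, the pairwise measures summarized in Table 1 are computed from the counts N^11, N^10, N^01, and N^00. A minimal numpy sketch of two common pairwise measures, the Q-statistic and the disagreement measure (the function names are ours, for illustration only):

```python
import numpy as np

def pair_counts(correct_i, correct_j):
    """Given boolean arrays marking which samples each learner classified
    correctly, return (N11, N10, N01, N00)."""
    ci = np.asarray(correct_i, dtype=bool)
    cj = np.asarray(correct_j, dtype=bool)
    return ((ci & cj).sum(), (ci & ~cj).sum(),
            (~ci & cj).sum(), (~ci & ~cj).sum())

def q_statistic(correct_i, correct_j):
    """Yule's Q: 0 for statistically independent learners, negative when
    the two learners tend to err on different samples (higher diversity)."""
    n11, n10, n01, n00 = pair_counts(correct_i, correct_j)
    return (n11 * n00 - n01 * n10) / (n11 * n00 + n01 * n10)

def disagreement(correct_i, correct_j):
    """Fraction of samples on which exactly one of the two learners is correct."""
    n11, n10, n01, n00 = pair_counts(correct_i, correct_j)
    return (n10 + n01) / (n11 + n10 + n01 + n00)

# Correct/incorrect indicators of two learners on N = 8 samples.
h1 = [1, 1, 1, 0, 1, 0, 1, 0]
h2 = [1, 0, 1, 1, 1, 0, 0, 1]
print(q_statistic(h1, h2))   # negative: the learners err on different samples
print(disagreement(h1, h2))  # 0.5
```

The ensemble-level value of a pairwise measure is then the average over all M(M−1)/2 learner pairs.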
2.2. Fusion Method
- Voting method: Voting methods are primarily used for classification tasks and can be categorized into majority voting, plurality voting, and weighted voting. Majority voting selects a class as the final prediction only if it receives more than half of the total votes from the basic learners. Plurality voting chooses the class with the highest number of votes as the final result; if several classes tie for the highest count, one is selected at random. Weighted voting takes into account the predictive performance of each basic learner, with each learner’s voting weight positively correlated with its accuracy.
- Averaging method: Averaging methods are commonly used in regression problems to combine the outputs of the basic learners. Simple averaging takes the mean of all predictions from the basic learners. In practice, weighted averaging is often more effective: the weights of the basic learners can be adjusted dynamically during training to optimally combine the outputs of different basic learners and improve the overall predictive accuracy.
- Meta-learning method: Meta-learning methods, exemplified by stacked generalization, use the predictions of the basic learners as new features for a secondary learner (the meta-learner). The meta-learner learns from the basic learners’ predictions to uncover latent relationships among them, thereby enhancing the model’s generalization ability and predictive accuracy.
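The voting and averaging strategies above can be sketched in a few lines of numpy (a minimal illustration with toy predictions of our own; the accuracy weights are assumed, not taken from any cited work):

```python
import numpy as np

# Class predictions of M = 5 basic learners for one sample (classes 0..2).
votes = np.array([2, 2, 1, 0, 2])
counts = np.bincount(votes, minlength=3)

# Plurality voting: the class with the most votes wins.
plurality = counts.argmax()

# Majority voting: accept only if some class exceeds half the votes;
# here class 2 holds 3 of 5 votes, so it is accepted.
majority = counts.argmax() if counts.max() > len(votes) / 2 else None

# Weighted voting: weight each learner's vote by its validation accuracy.
acc = np.array([0.9, 0.8, 0.6, 0.5, 0.7])
weighted = np.zeros(3)
for v, w in zip(votes, acc):
    weighted[v] += w
weighted_vote = weighted.argmax()

# Weighted averaging for regression: combine real-valued outputs.
preds = np.array([1.0, 1.2, 0.8, 1.1, 0.9])
w = acc / acc.sum()          # normalize the weights
avg = float(w @ preds)

print(plurality, majority, weighted_vote, round(avg, 3))
```

A stacked (meta-learning) combiner would instead treat the vector of basic-learner outputs as the input features of a second-stage model, as described in the meta-learning item above.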
2.3. Common Ensemble Methods
2.3.1. Bagging
2.3.2. Boosting
2.3.3. Stacking
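The three ensemble methods of Sections 2.3.1–2.3.3 correspond directly to off-the-shelf estimators in scikit-learn. A brief sketch, assuming scikit-learn is installed; the synthetic dataset and hyperparameters are ours, chosen only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification problem standing in for extracted target features.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

models = {
    # Bagging: trees trained on bootstrap resamples, combined by voting.
    "Bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                                 random_state=0),
    # Boosting: sequentially reweighted weak learners (decision stumps).
    "Boosting": AdaBoostClassifier(n_estimators=25, random_state=0),
    # Stacking: a logistic-regression meta-learner over heterogeneous bases.
    "Stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(max_depth=3)),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression()),
}
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.3f}")
```

The relative ranking of the three methods depends on the data; on imbalanced or noisy sets the comparison in Section 5 may differ from this toy case.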
3. Space Target Recognition
3.1. Basic Attributes and Characteristics of Space Targets
3.1.1. Basic Attributes of Space Targets
3.1.2. Characteristics of Space Targets
- Limited sample size: Space targets move at high speeds in space, and the imaging conditions are relatively stringent. As a result, both space-based and ground-based equipment can capture only a limited number of images.
- Poor image quality: Distant space targets often appear small and dim. When atmospheric interference or unstable cloud conditions are present, space targets can easily be obscured by background noise.
- Diverse targets: Space contains a wide variety of targets including satellites, space debris, and planets. Each type of target has distinct shapes, sizes, materials, and motion attributes.
- Dynamic changes: Space targets are typically in a state of dynamic change, such as in the orbital variations of satellites and the drift of space debris.
3.2. Hierarchy of Space Target Recognition
4. Advances in Application of Ensemble Learning in Space Target Recognition
4.1. Space Target Recognition Dataset
- BUAA-SID [67]: The BUAA-SID dataset is a space target image database established by Beihang University (BUAA). The 3D models of space targets were created in 3ds Max, from which visible-light simulation images were generated. The dataset contains a large number of space target images based on real scenes, making it suitable for training and testing various space target recognition algorithms; however, it does not include the 3D models of the space targets, and because the database was developed to meet specific project requirements, its content is not fully publicly available. The dataset includes images of 20 representative satellites, with 460 images generated for each satellite (9200 images in total) covering 230 sampling viewpoints.
- URSO [68]: URSO (Unreal Rendered Spacecraft On-Orbit) is a space mission simulator designed to provide realistic images and depth masks of commonly used spacecraft orbiting the Earth. Its purpose is to create a visually realistic environment with labeled data for tasks such as space rendezvous and docking, as well as debris removal. The dataset was constructed using Unreal Engine 4 (UE4) as the simulation engine, and the spacecraft models include Soyuz and Dragon. At a Low Earth Orbit altitude, 5000 viewpoints were randomly sampled across the illuminated surface of the Earth; for each viewpoint, interface images and depth maps were generated at a resolution of 1080 × 960 pixels. The URSO dataset comprises 5000 images intended for the training and evaluation of attitude estimation algorithms.
- SPEED [69]: Deep learning relies on large annotated datasets, but obtaining ideal images of target spacecraft in space along with precise pose labels is challenging. To address this, the Space Rendezvous Laboratory (SLAB) at Stanford University and the Advanced Concepts Team (ACT) at the European Space Agency (ESA) organized the Satellite Pose Estimation Challenge (SPEC). The dataset for this challenge, named SPEED (Satellite Pose Estimation Dataset), is the first publicly available dataset for spacecraft pose estimation. It comprises 14,998 synthetic images and 300 real images, all high-resolution grayscale images, and is intended for training and evaluating deep learning models in space-related tasks.
- A Spacecraft Dataset [70]: Dung et al. [70] introduced a spacecraft dataset consisting of both real and synthetic images, designed for tasks such as spacecraft detection, instance segmentation, and part recognition. In constructing the dataset, the spacecraft were decomposed into three components: solar panels, the main body, and antennas. Preprocessing removed similar or duplicate images; a segmentation model was then trained to predict initial masks, which were refined using Polygon-RNN++. The dataset consists of 3117 satellite and space station images captured from space, annotated with object bounding boxes, instance masks, and part masks. All images have a resolution of 1280 × 720.
- SPEED+ [71]: Although the SPEED dataset consists primarily of synthetic images, a domain gap remains between these images and real space target images. The SPEED+ dataset is an extended and improved version of SPEED, containing 60,000 synthetic training images and 9531 spacecraft-model simulation images. It offers a sufficient quantity and quality of spacecraft images to evaluate and compare the robustness of on-board models.
- Perez [72]: To explore the feasibility of using machine learning and deep learning methods to identify and classify satellites and space debris, Perez et al. [72] built an experimental setup to generate a series of image datasets of spacecraft and space debris. The dataset consists of both experimental and synthetic images. The experimental portion comprises 60,460 images captured in the visible and thermal infrared ranges for space target recognition and classification, covering three types of satellites (Calypso, Cloudsat, and Jason-3) and one type of debris. The synthetic portion was created using a graphics engine combined with 3D CAD models and contains the same three satellites and two different debris objects (treated as one category). The parameters of the synthetic images can be adjusted as needed, enabling the generation of unlimited data, and extra imaging effects such as blurring and saturation changes can be added during generation.
- SPARK [73]: The University of Luxembourg introduced SPARK (Spacecraft Recognition Leveraging Knowledge of Space Environment), a multi-modal image dataset for space targets. The dataset is generated in a realistic simulated space environment, with each image carrying bounding-box target annotations. It includes ten different satellites and five space debris objects (treated as one class), for 11 target categories in total. Each satellite has approximately 12,500 images and each debris object 5000 images, yielding roughly 150,000 RGB and depth images.
4.2. Ensemble of Traditional Machine Learning Models for Space Target Recognition
| Application | Reference | Year | Model | Dataset |
|---|---|---|---|---|
| Basic Learner Selection | [79]: SSE algorithm minimizing interval distance. | 2008 | Bagging | UCI |
| | [80]: Interval definition method considering the performance of the basic learner. | 2014 | Bagging | UCI |
| | [81]: Interval definition method considering classification confidence. | 2014 | EP-CC | UCI |
| | [82]: Subspace partitioning method for basic learners based on the confusion matrix. | 2014 | Bagging | UCI, Statlog |
| | [83]: Interval definition method integrating basic learner and sample weights. | 2017 | SA, SSE | Custom HRRP Dataset, UCI |
| | [84]: Selective ensemble technique based on K-medoids clustering and random reference classifiers. | 2019 | K-medoids, etc. | Custom HRRP Dataset, UCI |
| Feature Extraction | [85]: Feature selection based on a two-dimensional convolutional kernel and random sampling. | 2018 | Bagging | MSTAR |
| | [87]: Feature extraction method combining geometric features and wavelet moments. | 2015 | Boosting | Custom Space Target Dataset |
| | [88]: Small target detection method combining multi-feature extraction and XGBoost. | 2023 | Boosting | Measured Sea Clutter Data |
| | [72]: Feature dimensionality reduction based on PCA. | 2021 | SVM, PCA, DNN, etc. | Synthetic Space Target Dataset |
| Imbalanced Data | [54]: SMOTEBoost based on synthetic minority-class samples. | 2003 | Boosting | KDD Cup-99, etc. |
| | [89]: RUSBoost based on random undersampling. | 2010 | Boosting | SP3, etc. |
| | [90]: EusBoost based on random and evolutionary undersampling. | 2013 | Boosting | KEEL |
| | [56]: PCBoost based on data synthesis methods. | 2012 | Boosting | UCI |
| | [91]: Undersampling method based on K-means. | 2021 | Bagging | Glass5, Shuttle0vs4 |
| | [92]: Recognition method for space satellite operation modes based on SMOTE-Tomek hybrid sampling and Random Forest. | 2023 | Bagging | Orbiting scientific satellite telemetry parameters |
| Algorithm Improvement | [93]: Ensemble of Exemplar-SVMs. | 2011 | Exemplar-SVM | PASCAL VOC |
| | [94]: Data classification strategy based on Bagging. | 2018 | Bagging | Simulated data, real satellite telemetry data |
| | [95]: Maritime target recognition based on multi-source information fusion. | 2021 | Bagging | Hypothetical case examples |
| | [96]: Bayesian network structure optimization strategy based on ensemble learning. | 2021 | Bagging | ASIA, ALARM |
| | [97]: Radar target recognition based on XGBoost. | 2023 | Boosting | Custom radar dataset |
| | [98]: Small target detection based on XGBoost. | 2023 | Boosting | IPIX |
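Several entries in the table handle class imbalance by resampling before or during ensemble construction. A minimal numpy sketch of SMOTE-style synthetic minority oversampling, a simplified version of the idea behind [54] rather than the cited implementation:

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating each
    selected sample with one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # Distances from sample i to every other minority sample.
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        neighbours = np.argsort(d)[:k]
        j = rng.choice(neighbours)
        lam = rng.random()                  # interpolation coefficient in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy imbalanced set: 6 minority points, augmented with 14 synthetic ones.
X_min = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5], [0.2, 0.8]])
X_new = smote_like(X_min, n_new=14, rng=0)
print(X_new.shape)  # (14, 2)
```

In a SMOTEBoost-style pipeline, such synthetic samples would be regenerated at each boosting round before refitting the weak learner; random-undersampling variants (RUSBoost [89]) instead drop majority samples.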
4.3. Ensemble Deep Learning for Space Target Recognition
| Application | Reference | Year | Model | Dataset |
|---|---|---|---|---|
| CNN | [99]: CNN for spacecraft detection. | 2018 | CNN | Space Target Dataset |
| | [100]: Two-stage Convolutional Neural Network (T-SCNN) for space target recognition. | 2019 | T-SCNN | Space Target Dataset |
| | [101]: Satellite classification and pose regression using transfer learning and data augmentation. | 2020 | CNN | BUAA-SID |
| | [102]: LeNet-based CNN for debris classification. | 2020 | CNN | Space debris simulation images |
| | [103]: N-gram-based adaptive Convolutional Neural Network. | 2019 | CNN | Google Earth images |
| | [104]: Ensemble learning methods for high-resolution satellite image object detection. | 2022 | Bagging, CNN | Desert environment images |
| | [105]: Space target recognition method combining CNN and LSTM. | 2018 | CNN, LSTM | Space target simulation data |
| | [106]: Space target recognition method combining CNN and Transformer. | 2023 | CNN, Transformer | RCS simulation data |
| | [107]: Space debris detection and monitoring. | 2025 | Bi-LSTM-CNN | Satellites and debris in Earth’s orbit |
| | [108]: Meteor and space debris detection. | 2024 | Refined STACK-CNN | Mini-EUSO session 6 |
| | [109]: Space debris detection. | 2023 | SDebrisNet | SDD |
| | [110]: Spacecraft and debris recognition. | 2022 | Transformer + CNN | SPARK |
| R-CNN Series | [114]: Two-stage object detection network based on space target characteristics. | 2020 | R-CNN | BUAA-SID, space debris data |
| | [115]: Improved Mask R-CNN for feature detection and recognition of non-cooperative space targets. | 2020 | Mask R-CNN | Simulated satellite images |
| | [116]: Satellite component detection model (RSD) based on R-CNN. | 2020 | R-CNN | Simulated satellite images |
| | [117]: Model diversity computation method. | 2017 | Faster R-CNN | COCO |
| | [118]: Model diversity computation method. | 2018 | ResNet | PASCAL VOC |
| YOLO Series | [120]: Butterfly automatic detection and classification algorithm based on a YOLO ensemble. | 2020 | YOLOv3 | Butterfly Image Dataset |
| | [121]: Drone image object detection method based on transfer learning and ensemble learning. | 2021 | YOLOv3 | VisDrone, AU-AIR |
| | [122]: Chest imaging disease detection method based on a dynamically weighted Bagging strategy. | 2022 | Bagging, YOLOv5 | VinDr-CXR |
| | [123]: Lightweight YOLOv3-based spacecraft component detection. | 2022 | YOLOv3 | Spacecraft Component Detection Dataset |
| | [124]: Target detection method based on transfer learning and YOLOv5. | 2022 | YOLOv5 | RGBT |
| | [125]: Optimization of YOLOv7’s ELAN module using RepPoint. | 2023 | YOLOv7 | BUAA-SID, etc. |
| SSD Series | [127]: Ensemble learning-based object detection method based on SSD. | 2020 | SSD | Pascal VOC, MS COCO |
| | [128]: Ensemble learning fusion strategy and test-time augmentations for object detection. | 2020 | SSD | Pascal VOC, Stomata, etc. |
| | [129]: Novel detector RefineDet++ based on single-shot detection. | 2020 | SSD | Pascal VOC, MS COCO |
| | [130]: Enhanced SSD with a new feature fusion module. | 2021 | SSD | Pascal VOC, MS COCO |
| Others | [131]: Ensemble of CoAtNet networks. | 2023 | Stacking, CoAtNets | Satellite Simulation Data |
| | [132]: Directed bounding box fusion method for object detection. | 2023 | OBBStacking | DOTA, FAIR1M |
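Many of the deep ensembles above fuse their members at the output level by averaging class probabilities (soft voting). A framework-free numpy sketch of probability-level fusion; the logits and class layout are assumptions for illustration, not taken from any cited architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Raw class scores (logits) from three hypothetical networks, for 4 samples
# and 3 target classes (e.g., satellite / debris / other).
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 4, 3))

probs = softmax(logits)        # per-model class probabilities
fused = probs.mean(axis=0)     # probability-level (soft-vote) fusion
pred = fused.argmax(axis=1)    # final class per sample
print(pred)
```

Detection ensembles such as [127,128,132] fuse bounding boxes rather than class vectors, but the principle of combining per-model confidences before the final decision is the same.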
5. Experiments and Analysis
5.1. Experimental Dataset
5.2. Experimental Design
5.2.1. Feature Extraction
5.2.2. Parameter Settings
5.2.3. Performance Evaluation Metrics
5.3. Experimental Result Analysis
5.3.1. Comparative Analysis of Model Classification Performance
5.3.2. Analysis of Impact of the Number of Basic Learners on Classification Performance
5.3.3. Analysis of Classification Performance of Homogeneous and Heterogeneous Ensemble
6. Discussion
6.1. Challenges and Outlook
- Imbalance in datasets: In this paper, experiments were conducted using balanced datasets, where the Boosting algorithm exhibited relatively poor performance. However, given its common application for handling unbalanced datasets, Boosting may demonstrate superior performance in practical scenarios. Moving forward, the development of effective data resampling techniques and cost-sensitive learning methods represents an important direction for future research.
- Selection of basic learners: The diversity of basic learners is challenging to precisely define and measure, and its correlation with ensemble learning performance remains elusive. Future research should investigate automated selection mechanisms for basic learners to dynamically identify the optimal combinations.
- Computational resources and efficiency: The application of ensemble learning-based space target recognition methods in space is constrained by the limited computational, storage, and energy resources of satellite on-board computers. Additionally, the high latency and narrow bandwidth of satellite communication pose challenges for the real-time data transmission required by these methods. The adaptability and robustness of models may also be compromised in complex and dynamic space environments. To address these limitations, strategies such as model lightweighting, energy optimization, distributed computing, communication optimization, and adaptive learning should be considered.
- Establishment of standardized space target recognition datasets: Currently, the field of space target recognition lacks large-scale annotated datasets. A comprehensive and diverse standardized database would provide a robust foundation for the research and validation of integrated learning algorithms. Additionally, it would establish unified annotation and evaluation benchmarks, thereby advancing the standardization of space target recognition technologies.
- Single-scene detection: The ensemble learning-based space target recognition method proposed in this paper has been evaluated only for single-target scenes and does not account for complex scenarios involving multiple satellites or colliding satellite fragments within an image. In real space environments, multiple floating objects may appear tightly clustered or as a single entity, or a satellite may fragment into multiple parts due to collisions, complicating accurate target identification. Future research should explore multi-target perception and separation techniques such as instance segmentation methods (e.g., Mask R-CNN) or graph neural networks (GNNs) to model spatial relationships between targets. Addressing the multi-target perception challenge will bring the method closer to practical application requirements and provide more robust support for space target monitoring and collision warning.
6.2. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
SSA | Space situational awareness |
EL | Ensemble learning |
PCA | Principal Component Analysis |
EDL | Ensemble Deep Learning |
SGD | Stochastic Gradient Descent |
FEDL | Fast Ensemble Deep Learning |
EEL | Evolutionary Ensemble Learning |
ECOC | Error-Correcting Output Code |
MoE | Mixture of Experts |
CART | Classification and regression trees |
LEO | Low Earth Orbit |
HEO | High Earth Orbit |
GEO | Geostationary Earth Orbit |
CEO | Critical Earth Orbit |
URSO | Unreal Rendered Spacecraft On-Orbit |
SLAB | Space Rendezvous Laboratory |
ACT | Advanced Concepts Team |
ESA | European Space Agency |
SPEC | Satellite Pose Estimation Challenge |
SPEED | Satellite Pose Estimation Dataset |
SPARK | Spacecraft Recognition Leveraging Knowledge of Space Environment |
SVM | Support Vector Machine |
SSE | Static ensemble selection |
DSE | Dynamic ensemble selection |
RRC | Random reference classifier |
SMOTE | Synthetic Minority Oversampling Technique |
ELMs | Extreme learning machines |
MIC | Maximum Information Coefficient |
CNN | Convolutional Neural Network |
SSD | Single-Shot Multibox Detector |
KNN | K-nearest neighbors |
HOG | Histogram of Oriented Gradients |
ROC | Receiver Operating Characteristic |
TPR | True positive rate |
FPR | False positive rate |
AUC | Area Under the Curve |
References
- Hao, Q.; Li, J.; Zhang, M.; Wang, L. Spatial Non-cooperative Target Components Recognition Algorithm Based on Improved YOLOv3. Comput. Sci. 2022, 49, 358–362. [Google Scholar] [CrossRef]
- Liu, Z.; Zhang, H.; Ji, M.; Lu, T.; Zhang, C.; Yu, Y. Lightweight Multi-scale Feature Fusion Enhancement Algorithm for Spatial Non-cooperative Small Target Detection. J. Beijing Univ. Aeronaut. Astronaut. 2024, 1–12. [Google Scholar] [CrossRef]
- Zhao, L.; Peng, Y.; Li, D.; Zhang, Y. Remote sensing image target detection based on Yolo algorithm. J. Adv. Artif. Life Robot. 2021, 2, 134–139. [Google Scholar]
- Garcia-Pedrajas, N.; Hervas-Martinez, C.; Ortiz-Boyer, D. Cooperative coevolution of artificial neural network ensembles for pattern classification. IEEE Trans. Evol. Comput. 2005, 9, 271–302. [Google Scholar] [CrossRef]
- Mendes-Moreira, J.; Soares, C.; Jorge, A.M.; Sousa, J.F.D. Ensemble Approaches for Regression: A Survey. ACM Comput. Surv. 2013, 45, 10.1–10.40. [Google Scholar] [CrossRef]
- Dietterich, T.G. Machine Learning Research: Four Current Directions. Ai Mag. 1997, 18, 97–136. [Google Scholar] [CrossRef]
- Dietterich, T.G. Ensemble Methods in Machine Learning. In Proceedings of the International Workshop on Multiple Classifier Systems, Cagliari, Italy, 9–11 June 2000. [Google Scholar] [CrossRef]
- Schapire, R.E.; Freund, Y.; Bartlett, P.; Lee, W.S. Boosting the Margin: A New explanation for the Effectiveness of Voting Methods. In Proceedings of the Fourteenth International Conference on Machine Learning, Nashville, TN, USA, 8–12 July 1997. [Google Scholar]
- Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar]
- Kleinberg, E.M. Stochastic discrimination. Ann. Math. Artif. Intell. 1990, 1, 207–239. [Google Scholar] [CrossRef]
- James, G.M. Variance and Bias for General Loss Functions. Mach. Learn. 2003, 51, 115–135. [Google Scholar] [CrossRef]
- Dasarathy, B.V.; Sheela, B.V. A composite classifier system design: Concepts and methodology. Proc. IEEE 1979, 67, 708–713. [Google Scholar] [CrossRef]
- Hansen, L.K.; Salamon, P. Neural network ensembles. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 993–1001. [Google Scholar] [CrossRef]
- Jacobs, R.A.; Jordan, M.I.; Nowlan, S.J.; Hinton, G.E. Adaptive Mixtures of Local Experts. Neural Comput. 1991, 3, 79–87. [Google Scholar] [CrossRef]
- Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
- Freund, Y.; Schapire, R.E. Experiments with a New Boosting Algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1996. [Google Scholar]
- Ting, K.M.; Witten, I.H. Stacking Bagged and Dagged Models. In Proceedings of the International Conference on Machine Learning; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1997. [Google Scholar]
- Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Rodríguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. 2006, 28, 1619–1630. [Google Scholar] [CrossRef]
- Yang, Y.; Lv, H.; Chen, N. A Survey on ensemble learning under the era of deep learning. Artif. Intell. Rev. 2022, 56, 5545–5589. [Google Scholar] [CrossRef]
- Cai, Y.; Zhu, X.-F.; Sun, Z.-L.; Chen, A.-J. Semi-supervised and Ensemble Learning: A Review. Comput. Sci. 2017, 44, 7–13. [Google Scholar] [CrossRef]
- Hu, Y.; Qu, B.; Liang, J.; Wang, J.; Wang, Y. A survey on evolutionary ensemble learning algorithm. Chin. J. Intell. Sci. Technol. 2021, 3, 18–35. [Google Scholar] [CrossRef]
- Huang, G.; Li, Y.; Pleiss, G.; Liu, Z.; Hopcroft, J.E.; Weinberger, K.Q. Snapshot Ensembles: Train 1, get M for free. arXiv 2017, arXiv:1704.00109. [Google Scholar] [CrossRef]
- Zhou, Z.H. When semi-supervised learning meets ensemble learning. Front. Electr. Electron. Eng. China 2011, 6, 6–16. [Google Scholar] [CrossRef]
- Sun, B.; Wang, J.D.; Chen, H.Y.; Wang, Y.T. Diversity measures in ensemble learning. Control Decis. 2014, 29, 385–395. [Google Scholar] [CrossRef]
- Kuncheva, L.I. That Elusive Diversity in Classifier Ensembles. In Iberian Conference on Pattern Recognition and Image Analysis; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
- Yule, G.U. On the Association of Attributes in Statistics: With Illustrations from the Material of the Childhood Society, &c. Philos. Trans. R. Soc. Lond. 1900, 194, 257–319. [Google Scholar] [CrossRef]
- Kuncheva, L.I.; Whitaker, C.J. Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy. Mach. Learn. 2003, 51, 181–207. [Google Scholar] [CrossRef]
- Fleiss, J.L. Statistical Methods for Rates and Proportions; John Wiley and Sons: New York, NY, USA, 1981; pp. 1644–1645. [Google Scholar]
- Partridge, D.; Krzanowski, W. Software diversity: Practical statistics for its measurement and exploitation. Inf. Softw. Technol. 1997, 39, 707–717. [Google Scholar] [CrossRef]
- Banfield, R.E.; Hall, L.O.; Bowyer, K.W.; Kegelmeyer, W.P. A New Ensemble Diversity Measure Applied to Thinning Ensembles; Springer: Berlin, Germany, 2003. [Google Scholar] [CrossRef]
- Yang, Y.; Jiang, J. Hybrid Sampling-Based Clustering Ensemble With Global and Local Constitutions. IEEE Trans. Neural Networks Learn. Syst. 2016, 27, 952–965. [Google Scholar] [CrossRef]
- Rokach, L.; Maimon, O. Feature set decomposition for decision trees. Intell. Data Anal. 2005, 9, 131–158. [Google Scholar] [CrossRef]
- Yang, Y.; Jiang, J. HMM-based hybrid meta-clustering ensemble for temporal data. Knowl. Based Syst. 2014, 56, 299–310. [Google Scholar] [CrossRef]
- Breiman, L. Randomizing Outputs to Increase Prediction Accuracy. Mach. Learn. 2000, 40, 229–242. [Google Scholar] [CrossRef]
- Dietterich, T.G.; Bakiri, G. Solving Multiclass Learning Problems via Error-Correcting Output Codes. J. Artif. Intell. Research. 1994, 2, 263–286. [Google Scholar] [CrossRef]
- Lee, S.J.; Xu, Z.; Li, T.; Yang, Y. A novel bagging C4.5 algorithm based on wrapper feature selection for supporting wise clinical decision making. J. Biomed. Inform. 2018, 78, 144–155. [Google Scholar] [CrossRef] [PubMed]
- Gönen, M.; Alpaydın, E. Multiple Kernel Learning Algorithms. J. Mach. Learn. Res. 2011, 12, 2211–2268. [Google Scholar]
- Wang, W.; Jones, P.; Partridge, D. Diversity between Neural Networks and Decision Trees for Building Multiple Classifier Systems. In Multiple Classifier Systems. MCS 2000; Lecture Notes in Computer Science; Springer: Berlin, Heidelberg, 2000; Volume 1857. [Google Scholar] [CrossRef]
- Ji-Wei, X.U.; Yu, Y. A survey of ensemble learning approaches. J. Yunnan Univ. (Nat. Sci. Ed.) 2018, 40, 1082–1092. [Google Scholar]
- Tuysuzoglu, G.; Birant, D. Enhanced Bagging (eBagging): A Novel Approach for Ensemble Learning. Int. Arab. J. Inf. Technol. 2020, 17, 515–528. [Google Scholar] [CrossRef]
- Wang, S.; Yao, X. Diversity analysis on imbalanced data sets by using ensemble models. In Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining, Nashville, TN, USA, 30 March–2 April 2009; IEEE: Piscataway, NJ, USA. [Google Scholar] [CrossRef]
- Blaszczynski, J.; Stefanowski, J. Neighbourhood sampling in bagging for imbalanced data. Neurocomputing 2015, 150, 529–542. [Google Scholar] [CrossRef]
- Ye, Y.; Wu, Q.; Huang, J.Z.; Ng, M.K.; Li, X. Stratified sampling for feature subspace selection in random forests for high dimensional data. Pattern Recognit. 2013, 46, 769–787. [Google Scholar] [CrossRef]
- Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Ke, G.; Qi, M.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T. Lightgbm: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems; Springer: Berlin, Germany, 2017; pp. 1–9. [Google Scholar]
- Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System; ACM: New York, NY, USA, 2016. [Google Scholar] [CrossRef]
- Friedman, J.H. Stochastic gradient boosting. Comput. Stat. Data Anal. 2002, 38, 367–378. [Google Scholar] [CrossRef]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. arXiv 2017, arXiv:1706.09516. [Google Scholar] [CrossRef]
- Khoshgoftaar, T.; Geleyn, E.; Nguyen, L.; Bullard, L. Cost: Misclassification Cost-Sensitive Boosting. In Proceedings of the 16th International Conference on Machine Learning, San Francisco, CA, USA, 27–30 June 1999; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1999. [Google Scholar] [CrossRef]
- Song, J.; Lu, X.; Wu, X. An Improved AdaBoost Algorithm for Unbalanced Classification Data. In Proceedings of the FSKD 2009, 6th International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, China, 14–16 August 2009; IEEE Computer Society: Washington, DC, USA; Volume 6. [Google Scholar] [CrossRef]
- Joshi, M.V.; Kumar, V.; Agarwal, R.C. Evaluating Boosting Algorithms to Classify Rare Classes: Comparison and Improvements. In Proceedings of the IEEE International Conference on Data Mining, San Jose, CA, USA, 29 November–2 December 2001; IEEE Xplore: Piscataway, NJ, USA. [Google Scholar] [CrossRef]
- Chawla, N.V.; Lazarevic, A.; Hall, L.O.; Bowyer, K.W. SMOTEBoost: Improving Prediction of the Minority Class in Boosting; Springer: Berlin/Heidelberg, Germany, 2003. [Google Scholar] [CrossRef]
- Seiffert, C.; Khoshgoftaar, T.M.; Hulse, J.V.; Napolitano, A. RUSBoost: Improving Classification Performance When Training Data is Skewed. In Proceedings of the International Conference on Pattern Recognition, Tampa, FL, USA, 8–11 December 2008; IEEE: Piscataway, NJ, USA. [Google Scholar] [CrossRef]
- Li, X.F.; Li, J.; Dong, Y.F.; Qu, C.W. A New Learning Algorithm for Imbalanced Data—PCBoost. Chin. J. Comput. 2012, 35, 202–209. [Google Scholar] [CrossRef]
- Shunmugapriya, P.; Kanmani, S. Optimization of stacking ensemble configurations through Artificial Bee Colony algorithm. Swarm Evol. Comput. 2013, 12, 24–32. [Google Scholar] [CrossRef]
- Ledezma, A.; Aler, R.; Sanchis, A.; Borrajo, D. GA-stacking: Evolutionary stacked generalization. Intell. Data Anal. 2010, 14, 89–119. [Google Scholar] [CrossRef]
- Virgolin, M. Genetic Programming is Naturally Suited to Evolve Bagging Ensembles. In Proceedings of the Genetic and Evolutionary Computation Conference 2021, Lille, France, 10–14 July 2021. [Google Scholar] [CrossRef]
- He, Y.; Yu, F.; Song, P. Bagging R-CNN: Ensemble for Object Detection in Complex Traffic Scenes. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing 2023, Rhodes Island, Greece, 4–10 June 2023. [Google Scholar] [CrossRef]
- Parente, D.J. PolyBoost: An enhanced genomic variant classifier using extreme gradient boosting. Proteom. Clin. Appl. 2021, 15, 1900124. [Google Scholar] [CrossRef] [PubMed]
- Hammou, D.; Fezza, S.A.; Hamidouche, W. EGB: Image Quality Assessment based on Ensemble of Gradient Boosting. In Proceedings of the Computer Vision and Pattern Recognition, Virtual Conference, 19–25 June 2021; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
- Ren, L.; Zhang, H.; Seklouli, A.S.; Wang, T.; Bouras, A. Stacking-based multi-objective ensemble framework for prediction of hypertension. Expert Syst. Appl. 2023, 215, 119351. [Google Scholar] [CrossRef]
- Akpokiro, V.; Martin, T.; Oluwadare, O. EnsembleSplice: Ensemble deep learning model for splice site prediction. BMC Bioinform. 2022, 23, 413. [Google Scholar] [CrossRef]
- Ma, J. Study of Feature Extraction and Recognition Method of Space Radar Target; National University of Defense Technology: Changsha, China, 2006. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar] [CrossRef]
- Zhang, H.; Liu, Z.; Jiang, Z.; An, M.; Zhao, D. BUAA-SID1.0 Space Object Image Dataset. Spacecr. Recovery Remote Sens. 2010, 31, 65–71. [Google Scholar]
- Proenca, P.F.; Gao, Y. Deep Learning for Spacecraft Pose Estimation from Photorealistic Rendering. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 6007–6013. [Google Scholar] [CrossRef]
- Kisantal, M.; Sharma, S.; Park, T.H.; Izzo, D.; Märtens, M.; D’Amico, S. Satellite Pose Estimation Challenge: Dataset, Competition Design and Results. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 4083–4098. [Google Scholar] [CrossRef]
- Hoang, D.A.; Chen, B.; Chin, T.J. A Spacecraft Dataset for Detection, Segmentation and Parts Recognition. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Virtual Conference, 19–25 June 2021. [Google Scholar] [CrossRef]
- Park, T.H.; Märtens, M.; Lecuyer, G.; Izzo, D.; D’Amico, S. SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across Domain Gap. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022. [Google Scholar] [CrossRef]
- Perez, M.D.; Musallam, M.A.; Garcia, A.; Ghorbel, E.; Ismaeil, K.A.; Aouada, D.; Henaff, P.L. Detection and Identification of On-Orbit Objects Using Machine Learning. In Proceedings of the 8th European Conference on Space Debris, Virtual, 20–23 April 2021. [Google Scholar]
- Musallam, M.A.; Gaudilliere, V.; Ghorbel, E.; Ismaeil, K.A.; Perez, M.D.; Poucet, M. Spacecraft Recognition Leveraging Knowledge of Space Environment: Simulator, Dataset, Competition Design and Analysis. In Proceedings of the 2021 IEEE International Conference on Image Processing Challenges (ICIPC), Anchorage, AK, USA, 19–22 September 2021; pp. 11–15. [Google Scholar] [CrossRef]
- Khalil, M.; Fantino, E.; Liatsis, P. Evaluation of Oversampling Strategies in Machine Learning for Space Debris Detection. In Proceedings of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST), Abu Dhabi, United Arab Emirates, 8–10 December 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
- Hefele, J.D.; Bortolussi, F.; Zwart, S.P. Identifying Earth-impacting asteroids using an artificial neural network. Astron. Astrophys. 2020, 634, A45. [Google Scholar] [CrossRef]
- Reza, M. Galaxy morphology classification using automated machine learning. Astron. Comput. 2021, 37, 100492. [Google Scholar] [CrossRef]
- Carrasco, D.; Barrientos, L.F.; Pichara, K.; Anguita, T.; Murphy, D.N.A.; Gilbank, D.G.; Gladders, M.D.; Yee, H.K.C.; Hsieh, B.C.; López, S. Photometric Classification of quasars from RCS-2 using Random Forest. Astron. Astrophys. 2015, 584, A44. [Google Scholar] [CrossRef]
- Zhou, Z.H.; Wu, J.; Tang, W. Ensembling neural networks: Many could be better than all. Artif. Intell. 2002, 137, 239–263. [Google Scholar] [CrossRef]
- Martínez-Muñoz, G.; Hernandez-Lobato, D.; Suarez, A. An Analysis of Ensemble Pruning Techniques Based on Ordered Aggregation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 245. [Google Scholar] [CrossRef] [PubMed]
- Hu, Q.; Li, L.; Wu, X.; Schaefer, G.; Yu, D. Exploiting diversity for optimizing margin distribution in ensemble learning. Knowl. Based Syst. 2014, 67, 90–104. [Google Scholar] [CrossRef]
- Li, L.; Hu, Q.; Wu, X.; Yu, D. Exploration of classification confidence in ensemble learning. Pattern Recognit. 2014, 47, 3120–3131. [Google Scholar] [CrossRef]
- Bi, K.; Wang, X.D.; Yao, X.; Zhou, J.D. Adaptively Selective Ensemble Algorithm Based on Bagging and Confusion Matrix. Acta Electron. Sin. 2014, 42, 711–716. [Google Scholar] [CrossRef]
- Fan, X.; Hu, S.; He, J. Target recognition method for maritime surveillance radars based on ensemble margin optimization. J. Huazhong Univ. Sci. Technol. (Nat. Sci. Ed.) 2017, 45, 73–79. [Google Scholar]
- Fan, X.; Hu, S.; He, J. A dynamic selection ensemble method for target recognition based on clustering and randomized reference classifier. Int. J. Mach. Learn. Cybern. 2019, 10, 515–525. [Google Scholar] [CrossRef]
- Gu, Y.; Xu, Y. Fast SAR target recognition based on random convolution features and ensemble extreme learning machines. Guangdian Gongcheng/Opto-Electron. Eng. 2018, 45, 170432-1. [Google Scholar] [CrossRef]
- Guan, D.; Yuan, W.; Lee, Y.K.; Najeebullah, K.; Rasel, M.K. A Review of Ensemble Learning Based Feature Selection. IETE Tech. Rev. 2014, 31, 190–198. [Google Scholar] [CrossRef]
- Li, L.; Ren, Y.-M. Space Target Recognition Method Based on Improved Adaboost Algorithm. Comput. Syst. Appl. 2015, 24, 202–205. [Google Scholar]
- Xu, S.; Chen, K.; Bai, X.; Shui, P. A Method for Detecting Sea-Surface Floating Small Targets Based on Multi-Feature and Ensemble Learning. Chinese Patent CN202010544792.4, 31 March 2023. [Google Scholar]
- Seiffert, C.; Khoshgoftaar, T.M.; Van Hulse, J.; Napolitano, A. RUSBoost: A Hybrid Approach to Alleviating Class Imbalance. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2010, 40, 185–197. [Google Scholar] [CrossRef]
- Galar, M.; Fernández, A.; Barrenechea, E.; Herrera, F. EUSBoost: Enhancing ensembles for highly imbalanced data-sets by evolutionary undersampling. Pattern Recognit. 2013, 46, 3460–3471. [Google Scholar] [CrossRef]
- Xiao, L.; Han, L.; Wei, P.-F.; Zheng, X.-H.; Zhang, S.; Wu, F. Bagging Ensemble Learning Based Multiset Class-imbalanced Learning. Comput. Technol. Dev. 2021, 31, 1–6. [Google Scholar]
- Gao, L.; Chen, Z.; Guo, G.; Wang, C. Recognition of Working Pattern of Space Science Satellite Based on Ensemble Learning. Chin. J. Space Sci. 2023, 43, 768–779. (In Chinese) [Google Scholar] [CrossRef]
- Malisiewicz, T.; Gupta, A.; Efros, A.A. Ensemble of Exemplar-SVMs for Object Detection and Beyond. In 2011 International Conference on Computer Vision; IEEE: Piscataway, NJ, USA, 2011. [Google Scholar] [CrossRef]
- Shi, X.; Pang, J.; Zhang, X.; Peng, Y.; Liu, D. Satellite big data analysis based on bagging extreme learning machine. Yi Qi Yi Biao Xue Bao/Chin. J. Sci. Instrum. 2018, 39, 81–91. [Google Scholar] [CrossRef]
- Zhou, G.; Qu, Y.; Zhang, S. Application of Ensemble Learning in Multi-Source Information Fusion and Target Identification. J. Ordnance Equip. Eng. 2021, 42, 166–169. [Google Scholar] [CrossRef]
- Wang, S.; Qin, B. Bayesian Network Structure Learning by Ensemble Learning and Feedback Strategy. Chin. J. Comput. 2021, 44, 1051–1063. [Google Scholar]
- Zheng, Z.; Sun, S.; Liu, F. Radar Target Recognition Method Based on XGBoost. Anhui Provincial People’s Government, Chinese Society for Simulation. In Proceedings of the 35th China Simulation Conference, Hefei, China, 14 October 2023; School of Software Engineering, South China University of Technology: Guangdong, China; Shanghai Aerospace System Engineering Institute: Shanghai, China, 2023; Volume 9. [Google Scholar] [CrossRef]
- Shi, S.; Li, J.; Li, D.; Wu, X. Sea-Surface Small Target Detection Based on XGBoost with Dual False Alarm Control. Radar Sci. Technol. 2023, 21, 314–323. [Google Scholar] [CrossRef]
- Yan, Z.; Song, X.; Zhong, H. Spacecraft Detection Based on Deep Convolutional Neural Network; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar] [CrossRef]
- Wu, T.; Yang, X.; Song, B.; Wang, N.; Gao, X.; Kuang, L. T-SCNN: A Two-Stage Convolutional Neural Network for Space Target Recognition. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar] [CrossRef]
- Afshar, R.; Lu, S. Classification and Recognition of Space Debris and Its Pose Estimation Based on Deep Learning of CNNs; Springer: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
- Xi, J.; Xiang, Y.; Ersoy, O.K.; Cong, M.; Wei, X.; Gu, J. Space Debris Detection Using Feature Learning of Candidate Regions in Optical Image Sequences. IEEE Access 2020, 8, 150864–150877. [Google Scholar] [CrossRef]
- Bapu, J.J.; Florinabel, D.J.; Robinson, Y.H.; Julie, E.G.; Kumar, R.; Ngoc, V.T.N.; Son, L.H.; Tuan, T.M.; Giap, C.N. Adaptive convolutional neural network using N-gram for spatial object recognition. Earth Sci. Inform. 2019, 12, 525–540. [Google Scholar] [CrossRef]
- Vilhelm, A.; Limbert, M.; Clément, A.; Ceillier, T. Ensemble Learning techniques for object detection in high-resolution satellite images. arXiv 2022, arXiv:2202.10554. [Google Scholar] [CrossRef]
- Zhang, Y.; Wu, Z.; Wei, S.; Zhang, Y. Spatial Target Recognition Based on CNN and LSTM. High-tech Industrialization Research Council of China Intelligent Information Processing Industrialization Branch. In Proceedings of the 12th National Conference on Signal and Intelligent Information Processing and Application, Hangzhou, China, 19 October 2018; School of Electronics and Information Engineering, Beihang University: Beijing, China, 2018; Volume 4. [Google Scholar]
- Zhou, Y.; Wu, R.; Pan, Z.; Pan, X. Spatial Target Recognition Based on Transformer and CNN. Chinese Institute of Electronics. In Proceedings of the 18th National Conference on Radio Propagation, Qingdao, China, 24 September 2023; Beijing Institute of Technology: Beijing, China, 2023; Volume 3. [Google Scholar] [CrossRef]
- Priyadarshini, I. Enhanced Space Debris detection and monitoring using a hybrid Bi-LSTM-CNN and Bayesian Optimization. Artif. Intell. Appl. 2025, 3, 43–55. [Google Scholar] [CrossRef]
- Olivi, L.; Montanaro, A.; Bertaina, M.E.; Coretti, A.G.; Barghini, D.; Battisti, M. Refined STACK-CNN for Meteor and Space Debris Detection in Highly Variable Backgrounds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 10432–10453. [Google Scholar] [CrossRef]
- Tao, J.; Cao, Y.; Ding, M. SDebrisNet: A Spatial–Temporal Saliency Network for Space Debris Detection. Appl. Sci. 2023, 13, 4955. [Google Scholar] [CrossRef]
- AlDahoul, N.; Karim, H.A.; Momo, M.A. RGB-D based multi-modal deep learning for spacecraft and debris recognition. Sci. Rep. 2022, 12, 3924. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014. [Google Scholar] [CrossRef]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Yang, X.; Wu, T.; Wang, N.; Huang, Y.; Song, B.; Gao, X. HCNN-PSI: A hybrid CNN with partial semantic information for space target recognition. Pattern Recognit. 2020, 108, 107531. [Google Scholar] [CrossRef]
- Li, L.; Zhang, T. Feature detection and recognition of spatial noncooperative objects based on deep learning. CAAI Trans. Intell. Syst. 2020, 15, 1154–1162. [Google Scholar] [CrossRef]
- Chen, Y.; Gao, J.; Zhang, K. R-CNN-Based Satellite Components Detection in Optical Images. Int. J. Aerosp. Eng. 2020, 2020, 8816187. [Google Scholar] [CrossRef]
- Huang, J.; Rathod, V.; Sun, C.; Zhu, M.; Korattikara, A.; Fathi, A. Speed/accuracy trade-offs for modern convolutional object detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE Xplore: Piscataway, NJ, USA, 2017. [Google Scholar] [CrossRef]
- Lee, J.; Lee, S.K.; Yang, S.I. An ensemble method of cnn models for object detection. In Proceedings of the 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 17–19 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 898–901. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the Computer Vision & Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016. [Google Scholar] [CrossRef]
- Liang, B.; Wu, S.; Xu, K.; Hao, J. Butterfly Detection and Classification Based on Integrated YOLO Algorithm. In Genetic and Evolutionary Computing, Proceedings of the Thirteenth International Conference on Genetic and Evolutionary Computing, Qingdao, China, 1–3 November 2019; Springer: Singapore, 2020. [Google Scholar] [CrossRef]
- Walambe, R.; Marathe, A.; Kotecha, K. Multiscale Object Detection from Drone Imagery Using Ensemble Transfer Learning. Drones 2021, 5, 66. [Google Scholar] [CrossRef]
- Xiang-Jiu, C.; Ying-Jie, Y.; Quan-Le, L. Enhanced Bagging ensemble learning and multi-target detection algorithm. J. Jilin Univ. (Eng. Technol. Ed.) 2022, 52, 2916–2923. [Google Scholar] [CrossRef]
- Yuan, M.; Zhang, G.; Yu, Z.; Wu, Y.; Jin, Z. Spacecraft components detection based on a lightweight YOLOv3 model. In Proceedings of the 2022 IEEE 10th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 17–19 June 2022; IEEE: Piscataway, NJ, USA, 2022; Volume 10, pp. 1968–1973. [Google Scholar] [CrossRef]
- Mantau, A.J.; Widayat, I.W.; Leu, J.-S.; Köppen, M. A Human-Detection Method Based on YOLOv5 and Transfer Learning Using Thermal Image Data from UAV Perspective for Surveillance System. Drones 2022, 6, 290. [Google Scholar] [CrossRef]
- Tang, Q.; Li, X.; Xie, M.; Zhen, J. Intelligent Space Object Detection Driven by Data from Space Objects. Appl. Sci. 2024, 14, 333. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single Shot MultiBox Detector; Springer: Cham, Switzerland, 2016. [Google Scholar] [CrossRef]
- Xu, J.; Wang, W.; Wang, H.; Guo, J. Multi-model ensemble with rich spatial information for object detection. Pattern Recognit. 2020, 99, 107098. [Google Scholar] [CrossRef]
- Casado-Garcia, A.; Heras, J. Ensemble Methods for Object Detection. In Proceedings of the 24th European Conference on Artificial Intelligence: ECAI 2020, Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), Santiago de Compostela, Spain, 29 August–8 September 2020. [Google Scholar]
- Zhang, S.; Wen, L.; Lei, Z.; Li, S.Z. RefineDet++: Single-Shot Refinement Neural Network for Object Detection; IEEE: Piscataway, NJ, USA, 2021. [Google Scholar] [CrossRef]
- Zhang, H.; Hong, X.G.; Zhu, L. Detecting Small Objects in Thermal Images Using Single-Shot Detector. Autom. Control. Comput. Sci. 2021, 55, 202–211. [Google Scholar] [CrossRef]
- Aldahoul, N.; Karim, H.A.; Momo, M.A.; Escobara, F.I.F.; Tan, M.J.T. Space Object Recognition With Stacking of CoAtNets Using Fusion of RGB and Depth Images. IEEE Access 2023, 11, 5089–5109. [Google Scholar] [CrossRef]
- Lin, H.; Sun, C.; Liu, Y. OBBStacking: An ensemble method for remote sensing object detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2112–2120. [Google Scholar] [CrossRef]
Categories | Metrics | Description | Equation | Range | Value at Maximum Diversity |
---|---|---|---|---|---|
Pairwise Diversity Measures | Q [28] | Classification consistency | — | [−1, 1] | −1 |
 | ρ [29] | Classification consistency | — | [−1, 1] | −1 |
 | diss | Disagreement count | — | [0, 1] | 1 |
 | DF | Joint error rate | — | [0, 1] | 0 |
Non-pairwise Diversity Measures | KW [29] | Label Variance | — | [0, 1] | 1/4 |
 | Kappa [30] | Inter-classifier consistency | — | [−1, 1] | −1 |
 | E | Degree of consensus or disagreement | — | [0, 1] | 1 |
 | — | Variance in proportion of correct classification | — | [0, 1] | 0 |
 | GD [31] | Classification consistency | — | [0, 1] | 1 |
 | PCDM [32] | Correct classification proportion | — | — | — |
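The pairwise measures in the table above follow directly from the contingency counts defined in Section 2.1 (both correct, only one correct, both wrong). As an illustrative sketch (the function name and example predictions are ours, not from the paper), the Q statistic, correlation coefficient, disagreement, and double-fault measures can be computed as:

```python
import numpy as np

def pairwise_diversity(pred_i, pred_j, y_true):
    """Pairwise diversity measures for two basic learners,
    built from the four contingency counts N11, N10, N01, N00."""
    ci = np.asarray(pred_i) == np.asarray(y_true)   # hits of learner i
    cj = np.asarray(pred_j) == np.asarray(y_true)   # hits of learner j
    n11 = np.sum(ci & cj)      # both correct
    n10 = np.sum(ci & ~cj)     # only i correct
    n01 = np.sum(~ci & cj)     # only j correct
    n00 = np.sum(~ci & ~cj)    # both wrong
    n = n11 + n10 + n01 + n00  # total number of samples N
    q = (n11 * n00 - n10 * n01) / (n11 * n00 + n10 * n01)   # Q statistic
    rho = (n11 * n00 - n10 * n01) / np.sqrt(                # correlation
        (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    diss = (n10 + n01) / n     # disagreement measure
    df = n00 / n               # double-fault measure
    return {"Q": q, "rho": rho, "diss": diss, "DF": df}

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
pred_i = [0, 0, 0, 1, 1, 1, 1, 0]
pred_j = [0, 0, 1, 0, 1, 1, 0, 1]
print(pairwise_diversity(pred_i, pred_j, y_true))  # Q = -1: maximally diverse pair
```

In this toy example the two learners never err on the same sample, so Q reaches its maximum-diversity value of −1 while DF is 0, matching the table.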
Method | Core Idea | Training Method | Unique Advantages | Limitations | Application |
---|---|---|---|---|---|
Bagging | Generates multiple sub-datasets via bootstrap sampling; trains basic learners in parallel. | Parallel training | Reduces variance, prevents overfitting, and is suitable for high-variance models. | Less effective for high-bias models; higher computational cost. | [59]: Improving the performance of genetic programming. (https://github.com/marcovirgolin/2SEGP, accessed on 25 March 2025.) [60]: Object detection in complex traffic scenes. |
Boosting | Iteratively adjusts sample weights to transform weak learners into strong learners. | Sequential training | Reduces bias and improves accuracy; effective for imbalanced data. | Prone to overfitting; sensitive to noisy data. | [61]: Genomic variant classifier. (https://github.com/djparente/polyboost, accessed on 25 March 2025.) [62]: Image quality assessment. (https://github.com/Dounia18/EGB, accessed on 25 March 2025.) |
Stacking | Combines predictions from multiple base learners as new features to train a meta-learner. | Hierarchical training | Enhances generalization by leveraging diverse base learners; suitable for complex tasks. | High computational cost; complex model selection; risk of overfitting. | [63]: Prediction of hypertension. [64]: Splice site prediction. (https://github.com/OluwadareLab/EnsembleSplice, accessed on 25 March 2025.) |
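The three training paradigms compared above can be contrasted in a few lines with scikit-learn. This is a minimal sketch on synthetic placeholder data (the dataset, base learners, and hyperparameters are our illustrative choices, not the configurations used in the cited works):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for a space-target feature set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    # Bagging: parallel training on bootstrap-resampled subsets.
    "Bagging": BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=50, random_state=0),
    # Boosting: sequential training with sample reweighting.
    "Boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: heterogeneous base learners feeding a meta-learner.
    "Stacking": StackingClassifier(
        estimators=[("svm", SVC()), ("tree", DecisionTreeClassifier())],
        final_estimator=LogisticRegression()),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```

Note how the three constructors mirror the table: Bagging takes one base learner and resamples, Boosting reweights between sequential rounds, and Stacking names heterogeneous base learners plus a meta-learner.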
Dataset | Year | Image Type | Resolution | Target | Category Count | Sample Count | Application | Disadvantage | Links |
---|---|---|---|---|---|---|---|---|---|
BUAA-SID 1.0 [67] | 2010 | Synthetic | 320 × 240 | Satellite | 20 | 9200 | Space target classification and identification | Limited number of satellites; synthetic images need improved realism | — |
URSO [68] | 2019 | Synthetic | 1280 × 960 | Satellite | 2 | 5000 | Spacecraft attitude estimation | No segmentation annotations provided | https://zenodo.org/records/3279632 (Accessed on 25 March 2025) |
SPEED [69] | 2019 | Synthetic & Real | 1920 × 1200 | Spacecraft | 1 | 15,300 | Non-cooperative satellite attitude estimation | Reliance on synthetic images for training, differing from real-space images | https://zenodo.org/records/6327547 (Accessed on 25 March 2025) |
A Spacecraft Dataset [70] | 2021 | Synthetic & Real | 1280 × 720 | Satellite | — | 3117 | Spacecraft target detection, segmentation, and partial recognition | Limited dataset size; fixed image resolution | https://github.com/Yurushia1998/SatelliteDataset (Accessed on 25 March 2025) |
SPEED+ [71] | 2021 | Synthetic & Real | 1920 × 1200 | Spacecraft | 1 | 69,531 | Visual spacecraft attitude estimation and relative navigation | Contains only single-target images | https://kelvins.esa.int/pose-estimation-2021/ (Accessed on 25 March 2025) |
Perez [72] | 2021 | Synthetic | — | Satellite, Debris | 5 | 70,960 | Space debris detection and tracking; satellite classification | Single background; low variability in size and orientation | —
SPARK [73] | 2021 | Synthetic | — | Satellite, Debris | 11 | ~150,000 | Space target recognition | Need for periodic updates due to increasing new space target types | https://cvi2.uni.lu/spark-2021/ (Accessed on 25 March 2025)
Category | Detailed Information |
---|---|
CPU | 8-core Ryzen 7 6800H 3.2 GHz |
Memory | 16 GByte |
Storage | 512 GByte |
Operating system | Windows 11 Home Chinese Edition |
System type | x64-based PC |
Programming language | Python |
Programming tool | Visual Studio Code |
Model | Precision | Recall | F1-Score | Accuracy Score | Execution Time |
---|---|---|---|---|---|
SVM | 0.97512 | 0.97496 | 0.97504 | 0.97500 | 0.9712 |
Decision tree | 0.93730 | 0.93725 | 0.93720 | 0.93722 | 0.3871 |
KNN | 0.93004 | 0.92413 | 0.92508 | 0.92556 | 0.4252 |
MLP | 0.98451 | 0.98444 | 0.98441 | 0.98444 | 2.4592 |
AdaBoost | 0.98010 | 0.97910 | 0.97951 | 0.97944 | 26.7865 |
Random Forest | 0.97903 | 0.97887 | 0.97895 | 0.97889 | 3.0597 |
Stacking | 0.98740 | 0.98718 | 0.98729 | 0.98722 | 12.3631 |
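The per-model scores in the table above can be reproduced in form with scikit-learn's metric functions. The sketch below uses synthetic placeholder data and a random forest; the averaging scheme (macro) and the timing scope (training only) are our assumptions, not necessarily the paper's exact protocol:

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)
from sklearn.model_selection import train_test_split

# Placeholder multi-class data standing in for the space-target samples.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
t0 = time.perf_counter()
clf.fit(X_tr, y_tr)
elapsed = time.perf_counter() - t0      # execution (training) time in seconds
y_pred = clf.predict(X_te)

metrics = {
    "Precision": precision_score(y_te, y_pred, average="macro"),
    "Recall": recall_score(y_te, y_pred, average="macro"),
    "F1-Score": f1_score(y_te, y_pred, average="macro"),
    "Accuracy Score": accuracy_score(y_te, y_pred),
    "Execution Time": elapsed,
}
print(metrics)
```

Swapping in the other classifiers from the table (SVM, decision tree, KNN, MLP, AdaBoost, stacking) only changes the `clf` line, which makes this loop-friendly for the kind of side-by-side comparison reported above.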
Model | Ensemble Method | Basic Learners | Precision | Recall | F1-Score | Accuracy Score |
---|---|---|---|---|---|---|
AdaBoost | Homogeneous | Decision tree | 0.98010 | 0.97910 | 0.97951 | 0.97944 |
Random Forest | Homogeneous | Decision tree | 0.97903 | 0.97887 | 0.97895 | 0.97889 |
Stacking | Homogeneous | Decision tree, LogisticRegression | 0.98740 | 0.98718 | 0.98729 | 0.98722 |
AdaBoost | Heterogeneous | SVM, KNN, decision tree, LogisticRegression, RandomForest | 0.91315 | 0.90944 | 0.90916 | 0.90944 |
Bagging | Heterogeneous | SVM, decision tree | 0.96278 | 0.96278 | 0.96275 | 0.96278 |
Stacking | Heterogeneous | SVM, RandomForest, LogisticRegression | 0.97778 | 0.97778 | 0.97778 | 0.98778 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, S.; Zhao, D.; Hong, H.; Sun, K. A Review of Space Target Recognition Based on Ensemble Learning. Aerospace 2025, 12, 278. https://doi.org/10.3390/aerospace12040278