Active Learning in Feature Extraction for Glass-in-Glass Detection
Abstract
1. Introduction
1.1. Related Work
1.1.1. Challenges in Detecting Defects in Food Products
1.1.2. Quality Control Methods
1.1.3. Detection of Glass Fragments in Glass Packaging
1.1.4. Classification of Inclusions vs. Anomaly Detection
1.1.5. Anomaly Detection
1.1.6. Active Learning
- Active search: The model actively seeks examples from different classes that are most confusing or difficult to classify. By asking questions about new examples, the model can effectively learn to distinguish between different classes more efficiently [59].
- Active reinforcement: The model actively interacts with the environment, making decisions and observing their consequences. Through exploration and selecting actions with the most significant potential for reinforcement, the model can learn effective strategies or policies more quickly [60].
- Active query learning: The model decides which examples it wants labeled and then requests labels for those examples. This allows the selective gathering of information, focusing on the areas most challenging or relevant to the model. The strategy has two variants: one can query data close to the decision boundary (the most doubtful samples, since good and faulty classes usually mix near the boundary) or data far from the decision boundary (so the model reinforces its knowledge with the samples it classifies most confidently) [61].
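As an illustration of boundary-based querying, the following minimal uncertainty-sampling sketch (function and variable names are illustrative, not from the paper) picks the samples whose classifier confidence is lowest, i.e., those closest to the decision boundary:

```python
import numpy as np

def least_confident(probabilities, k):
    """Indices of the k samples whose top-class probability is lowest,
    i.e. the samples closest to the decision boundary."""
    confidence = probabilities.max(axis=1)
    return np.argsort(confidence)[:k]

probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],   # near the boundary -> most informative
                  [0.20, 0.80],
                  [0.51, 0.49]])  # near the boundary
queries = least_confident(probs, k=2)  # indices of the two most uncertain samples
```

Labeling the queried samples and retraining closes one active learning iteration.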
2. Materials and Methods
2.1. Description of the Current System’s Operation
2.1.1. Feature Extractor
2.1.2. Feature Processing
2.1.3. Autoencoder
2.2. Active Learning Feature Extractor in Anomaly Detection
- Selecting samples with anomaly scores close to the threshold.
- Selecting samples with anomaly scores far from the threshold.
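Both selection rules reduce to thresholding the distance between a sample's anomaly score and the decision threshold. A minimal numpy sketch, where the function name, the example threshold of 0.5, and the scores are illustrative:

```python
import numpy as np

def select_for_fine_tuning(scores, threshold=0.5, tolerance=0.15, mode="close"):
    """Indices of samples whose anomaly score lies within ('close') or
    beyond ('far') `tolerance` of the decision threshold."""
    distance = np.abs(np.asarray(scores) - threshold)
    keep = distance <= tolerance if mode == "close" else distance > tolerance
    return np.flatnonzero(keep)

scores = [0.10, 0.42, 0.55, 0.90]
close = select_for_fine_tuning(scores, mode="close")  # doubtful samples
far = select_for_fine_tuning(scores, mode="far")      # confidently scored samples
```

In the paper's loop, such samples are accumulated during regular operation until 20 of them trigger a fine-tuning iteration.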
The Process of Active Learning
- In the first iteration, a clean model (MobileNetV2 or EfficientNetV2) is obtained. The model is loaded without its head because the default output classes do not match the task. The following initial layers are added at the model input (see Figure 2):
- Input layer with the shape of a 224 × 224 image;
- Data augmentation layer (flip and rotate);
- Pixel scaling layer to the range of −1 to 1.
The final layers are:
- Global average pooling;
- Dropout layer;
- Prediction layer.
- At this point, the feature extractor is cut out of the model. This involves removing the last 12 layers, so that the last layer has an output shape of 7 × 7 × 160 for the MobileNetV2 model and 7 × 7 × 232 for the EfficientNetV2 model, making it applicable in the anomaly detector.
- The autoencoder is fitted with a set of non-faulty samples. Then, the regular operation of the system is initiated: the system takes successive samples from the set of jar images, saving those images that meet the distance requirements from the threshold for the anomaly detector score.
- If 20 such samples (close to or far from the threshold) are collected, fine-tuning of the neural network model is initiated. The number 20 was chosen arbitrarily: it had to be small enough to provide material for several training iterations on a relatively small dataset. The process is as follows:
- As part of fine tuning, the previously saved head of the model is reattached to the feature extractor, so the full model is restored.
- The model is pre-trained, with weight changes allowed only in the last three layers. This sets up the model’s head so that deeper fine-tuning starts with a prediction layer capable of immediately working on the specified classes (good and defective only). The preliminary training takes place on the data collected in point 3.
- Then, the main fine-tuning takes place. This time, weight changes are allowed in the number of layers specified by the “number of layers subject to fine-tuning” parameter. A learning rate divider determines how many times smaller the learning rate is compared to the preliminary tuning, and the last parameter is the number of fine-tuning epochs.
- After fine-tuning is completed, a new feature extractor is constructed from the newly trained model by cutting off the top 12 layers; these 12 layers are kept separately for the next fine-tuning iteration.
- At this step, each time the feature extractor is fine-tuned, the autoencoder model is retrained to ensure that subsequent predictions on production data incorporate the new knowledge gained by the model.
- Further data are collected. If more than 20 samples are collected again, the procedure from points 3–5 is repeated.
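The steps above can be sketched in Keras. The dropout rate, rotation factor, Adam optimizer, example hyperparameter values, and the exact cut index for the extractor are illustrative assumptions; the paper also starts from pretrained weights, whereas `weights=None` keeps this sketch self-contained:

```python
from tensorflow import keras

NUM_CLASSES = 2                                    # good vs. defective
base_lr, lr_divider, n_finetune = 1e-3, 10, 100    # illustrative values

# Headless backbone (no download with weights=None; the paper uses pretrained weights).
backbone = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

# Initial layers: input, augmentation (flip/rotate), pixel scaling to [-1, 1].
inputs = keras.Input(shape=(224, 224, 3))
x = keras.layers.RandomFlip()(inputs)
x = keras.layers.RandomRotation(0.1)(x)
x = keras.layers.Rescaling(1 / 127.5, offset=-1)(x)
x = backbone(x, training=False)
# Final layers: global average pooling, dropout, prediction.
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)
outputs = keras.layers.Dense(NUM_CLASSES)(x)
model = keras.Model(inputs, outputs)

loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Preliminary tuning: only the head is trainable.
backbone.trainable = False
model.compile(keras.optimizers.Adam(base_lr), loss=loss)
# model.fit(saved_samples, ...)   # the collected near/far-threshold samples

# Main fine-tuning: unfreeze the last n_finetune backbone layers and
# divide the learning rate by the divider.
backbone.trainable = True
for layer in backbone.layers[:-n_finetune]:
    layer.trainable = False
model.compile(keras.optimizers.Adam(base_lr / lr_divider), loss=loss)
# model.fit(saved_samples, epochs=10, ...)

# Cut out the feature extractor for the anomaly detector; the cut index is
# illustrative (the paper removes the last 12 layers, output 7 x 7 x 160).
extractor = keras.Model(backbone.input, backbone.layers[-12].output)
```

After each such iteration, the extracted features feed the retrained autoencoder, closing the active learning loop.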
2.3. Adopted Metrics and Indicators
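The indicators reported in Section 3 are sensitivity, specificity, and F1-score. A minimal sketch of their computation from binary predictions, assuming (as a labeling convention, not stated explicitly here) that the defective class is the positive class:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Sensitivity, specificity and F1-score; label 1 = defective (positive)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)          # recall on defective samples
    specificity = tn / (tn + fp)          # recall on good samples
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```

Sensitivity thus tracks correct predictions of defective samples, and specificity tracks correct predictions of good samples.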
2.4. Dataset and Experimental Parameters
3. Results and Discussion
- After each iteration of active learning, the training set was cleared.
- Each iteration of active learning took new data from the system’s predictions and all doubtful data from previous iterations.
- The indicator of correct predictions of defective samples was initially high and remained unchanged.
- The indicator of correct predictions of good samples improved significantly by 30 percentage points.
- It is impossible to unequivocally determine which set of hyperparameters is the best (considering the learning rate, learning rate divider, layers of fine tuning, and threshold tolerance).
- Two combinations achieved maximum indicators after only two fine-tuning iterations.
3.1. Prediction Time and Training Time
3.2. Model Evaluation
3.3. Comparison of MobileNetV2 and EfficientNetV2 Results
3.4. Glass Detection
3.5. Conclusions from Active Learning
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Edwards, M. (Ed.) Detecting Foreign Bodies in Food; Elsevier: Amsterdam, The Netherlands, 2004.
- Graves, M.; Smith, A.; Batchelor, B. Approaches to foreign body detection in foods. Trends Food Sci. Technol. 1998, 9, 21–27.
- Rosak-Szyrocka, J.; Abbase, A.A. Quality management and safety of food in HACCP system aspect. Prod. Eng. Arch. 2020, 26, 50–53.
- Tadhg, B.; Sun, D.-W. Improving quality inspection of food products by computer vision–A review. J. Food Eng. 2004, 61, 3–16.
- Chen, H.; Liou, B.K.; Dai, F.J.; Chuang, P.T.; Chen, C.S. Study on the risks of metal detection in food solid seasoning powder and liquid sauce to meet the core concepts of ISO 22000: 2018 based on the Taiwanese experience. Food Control 2020, 111, 107071.
- Mettler-Toledo Glass Container X-ray Inspection Systems. Available online: https://www.mt.com/us/en/home/products/Product-Inspection_1/safeline-x-ray-inspection/glasscontainer-x-ray-inspection.html (accessed on 4 April 2024).
- Mohd Khairi, M.T.; Ibrahim, S.; Md Yunus, M.A.; Faramarzi, M. Noninvasive techniques for detection of foreign bodies in food: A review. J. Food Process Eng. 2018, 41, e12808.
- Mi, C.; Kai, C.; Zhiwei, Z. Research on tobacco foreign body detection device based on machine vision. Trans. Inst. Meas. Control 2020, 42, 2857–2871.
- Becker, F.; Schwabig, C.; Krause, J.; Leuchs, S.; Krebs, C.; Gruna, R. From visual spectrum to millimeter wave: A broad spectrum of solutions for food inspection. IEEE Antennas Propag. Mag. 2020, 62, 55–63.
- Xie, T.; Li, X.; Zhang, X.; Hu, J.; Fang, Y. Detection of Atlantic salmon bone residues using machine vision technology. Food Control 2021, 123, 107787.
- Zhang, Z.; Luo, Y.; Deng, X.; Luo, W. Digital image technology based on PCA and SVM for detection and recognition of foreign bodies in lyophilized powder. Technol. Health Care 2020, 28, 197–205.
- Zhu, L.; Spachos, P.; Pensini, E.; Plataniotis, K.N. Deep learning and machine vision for food processing: A survey. Curr. Res. Food Sci. 2021, 4, 233–249.
- Hu, J.; Xu, Z.; Li, M.; He, Y.; Sun, X.; Liu, Y. Detection of foreign-body in milk powder processing based on terahertz imaging and spectrum. J. Infrared Millim. Terahertz Waves 2021, 42, 878–892.
- Sun, X.; Cui, D.; Shen, Y.; Li, W.; Wang, J. Non-destructive detection for foreign bodies of tea stalks in finished tea products using terahertz spectroscopy and imaging. Infrared Phys. Technol. 2022, 121, 104018.
- Li, Z.; Meng, Z.; Soutis, C.; Wang, P.; Gibson, A. Detection and analysis of metallic contaminants in dry foods using a microwave resonator sensor. Food Control 2022, 133, 108634.
- Wang, Q.; Hameed, S.; Xie, L.; Ying, Y. Non-destructive quality control detection of endogenous contaminations in walnuts using terahertz spectroscopic imaging. J. Food Meas. Charact. 2020, 14, 2453–2460.
- Voss, J.O.; Doll, C.; Raguse, J.D.; Beck-Broichsitter, B.; Walter-Rittel, T.; Kahn, J. Detectability of foreign body materials using X-ray, computed tomography and magnetic resonance imaging: A phantom study. Eur. J. Radiol. 2021, 135, 109505.
- Vasquez, J.A.T.; Scapaticci, R.; Turvani, G.; Ricci, M.; Farina, L.; Litman, A. Noninvasive inline food inspection via microwave imaging technology: An application example in the food industry. IEEE Antennas Propag. Mag. 2020, 62, 18–32.
- Ricci, M.; Štitić, B.; Urbinati, L.; Di Guglielmo, G.; Vasquez, J.A.T.; Carloni, L.P. Machine-learning-based microwave sensing: A case study for the food industry. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 503–514.
- Saeidan, A.; Khojastehpour, M.; Golzarian, M.R.; Mooenfard, M.; Khan, H.A. Detection of foreign materials in cocoa beans by hyperspectral imaging technology. Food Control 2021, 129, 108242.
- Zarezadeh, M.R.; Aboonajmi, M.; Varnamkhasti, M.G.; Azarikia, F. Olive oil classification and fraud detection using E-nose and ultrasonic system. Food Anal. Methods 2021, 14, 2199–2210.
- Alam, S.A.Z.; Jamaludin, J.; Asuhaimi, F.A.; Ismail, I.; Ismail, W.Z.W.; Rahim, R.A. Simulation Study of Ultrasonic Tomography Approach in Detecting Foreign Object in Milk Packaging. J. Tomogr. Syst. Sens. Appl. 2023, 6, 17–24.
- Ou, X.; Chen, X.; Xu, X.; Xie, L.; Chen, X.; Hong, Z.; Yang, H. Recent development in x-ray imaging technology: Future and challenges. Research 2021, 2021, 9892152.
- Lim, H.; Lee, J.; Lee, S.; Cho, H.; Lee, H.; Jeon, D. Low-density foreign body detection in food products using single-shot grid-based dark-field X-ray imaging. J. Food Eng. 2022, 335, 111189.
- Li, F.; Liu, Z.; Sun, T.; Ma, Y.; Ding, X. Confocal three-dimensional micro X-ray scatter imaging for non-destructive detecting foreign bodies with low density and low-Z materials in food products. Food Control 2015, 54, 120–125.
- Bauer, C.; Wagner, R.; Leisner, J. Foreign body detection in frozen food by dual energy X-ray transmission. Sens. Transducers 2021, 253, 23–30.
- Morita, K.; Ogawa, Y.; Thai, C.N.; Tanaka, F. Soft X-ray image analysis to detect foreign materials in foods. Food Sci. Technol. Res. 2003, 9, 137–141.
- Wang, Q.; Wu, K.; Wang, X.; Sun, Y.; Yang, X.; Lou, X. Recognition of dumplings with foreign body based on X-ray and convolutional neural network. Shipin Kexue Food Sci. 2019, 40, 314–320.
- Arsenault, M.; Bentz, J.; Bouchard, J.L.; Cotnoir, D.; Verreault, S.; Maldague, X. Glass fragments detector for a jar filling process. In Proceedings of the Canadian Conference on Electrical and Computer Engineering, Halifax, NS, Canada, 14–17 September 1993.
- Schlager, D.; Sanders, A.B.; Wiggins, D.; Boren, W. Ultrasound for the detection of foreign bodies. Ann. Emerg. Med. 1991, 20, 189–191.
- Zhou, X.; Wang, Y.; Xiao, C.; Zhu, Q.; Lu, X.; Zhang, H.; Zhao, H. Automated visual inspection of glass bottle bottom with saliency detection and template matching. IEEE Trans. Instrum. Meas. 2019, 68, 4253–4267.
- Ma, H.M.; Su, G.D.; Wang, J.Y.; Ni, Z. A glass bottle defect detection system without touching. In Proceedings of the International Conference on Machine Learning and Cybernetics, Beijing, China, 4–5 November 2022; Volume 2.
- Li, F.; Hang, Z.; Yu, G.; Wei, G.; Xinyu, C. The method for glass bottle defects detecting based on machine vision. In Proceedings of the 29th Chinese Control and Decision Conference (CCDC), Chongqing, China, 28–30 May 2017.
- Bosen, Z.; Basir, O.A.; Mittal, G.S. Detection of metal, glass and plastic pieces in bottled beverages using ultrasound. Food Res. Int. 2003, 36, 513–521.
- McFarlane, N.J.B.; Bull, C.R.; Tillett, R.D.; Speller, R.D.; Royle, G.J. SE—Structures and environment: Time constraints on glass detection in food materials using Compton scattered X-rays. J. Agric. Eng. Res. 2001, 79, 407–418.
- Strobl, M. Red Wine Bottling and Packaging; Red Wine Technology; Academic Press: Cambridge, MA, USA, 2019; pp. 323–339.
- Heuft eXaminer II XAC. Available online: https://heuft.com/en/product/beverage/full-containers/foreign-object-inspection-heuft-examiner-ii-xac-bev (accessed on 4 April 2024).
- Biswajit, J.; Nayak, G.K.; Saxena, S. Convolutional neural network and its pretrained models for image classification and object detection: A survey. Concurr. Comput. Pract. Exp. 2022, 34, e6767.
- Xu, S.; Zhang, M.; Song, W.; Mei, H.; He, Q.; Liotta, A. A systematic review and analysis of deep learning-based underwater object detection. Neurocomputing 2023, 527, 204–232.
- Deng, L.; Bi, L.; Li, H.; Chen, H.; Duan, X.; Lou, H.; Liu, H. Lightweight aerial image object detection algorithm based on improved YOLOv5s. Sci. Rep. 2023, 13, 7817.
- Liu, J.; Xie, G.; Wang, J.; Li, S.; Wang, C.; Zheng, F.; Jin, Y. Deep industrial image anomaly detection: A survey. Mach. Intell. Res. 2024, 21, 104–135.
- Liu, Z.; Zhou, Y.; Xu, Y.; Wang, Z. Simplenet: A simple network for image anomaly detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 24 June 2023.
- Xie, G.; Wang, J.; Liu, J.; Lyu, J.; Liu, Y.; Wang, C.; Jin, Y. Im-iad: Industrial image anomaly detection benchmark in manufacturing. IEEE Trans. Cybern. 2024, 54, 2720–2733.
- Ravpreet, K.; Singh, S. A comprehensive review of object detection with deep learning. Digit. Signal Process 2023, 132, 103812.
- Ezekiel, O.O.; Irhebhude, M.E.; Evwiekpaefe, A.E. A comparative study of YOLOv5 and YOLOv7 object detection algorithms. J. Comput. Soc. Inform. 2023, 2, 1–12.
- Sheldon, M.; Agarwal, M. A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification. In Proceedings of the 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), Bengaluru, India, 19–21 November 2021; Volume 1.
- Pin, W.; Fan, E.; Wang, P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit. Lett. 2021, 141, 61–67.
- Liu, Y.; Sun, P.; Wergeles, N.; Shang, Y. A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 2021, 172, 114602.
- José, M.; Domingues, I.; Bernardino, J. Comparing vision transformers and convolutional neural networks for image classification: A literature review. Appl. Sci. 2023, 13, 5521.
- Bharadiya, J. Convolutional neural networks for image classification. Int. J. Innov. Sci. Res. Technol. 2023, 8, 673–677.
- Gulzar, Y. Fruit image classification model based on MobileNetV2 with deep transfer learning technique. Sustainability 2023, 15, 1906.
- Beggel, L.; Pfeiffer, M.; Bischl, B. Robust anomaly detection in images using adversarial autoencoders. In Proceedings of the Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2019, Würzburg, Germany, 16–20 September 2019; Part I. Springer International Publishing: Berlin/Heidelberg, Germany, 2020.
- Chow, J.K.; Su, Z.; Wu, J.; Tan, P.S.; Mao, X.; Wang, Y.H. Anomaly detection of defects on concrete structures with the convolutional autoencoder. Adv. Eng. Inform. 2020, 45, 101105.
- Thudumu, S.; Branch, P.; Jin, J.; Singh, J. A comprehensive survey of anomaly detection techniques for high dimensional big data. J. Big Data 2020, 7, 42.
- Nassif, A.B.; Talib, M.A.; Nasir, Q.; Dakalbab, F.M. Machine learning for anomaly detection: A systematic review. IEEE Access 2021, 9, 78658–78700.
- Ren, P.; Xiao, Y.; Chang, X.; Huang, P.Y.; Li, Z.; Gupta, B.B.; Wang, X. A survey of deep active learning. ACM Comput. Surv. CSUR 2021, 54, 42.
- Cao, X.; Yao, J.; Xu, Z.; Meng, D. Hyperspectral image classification with convolutional neural network and active learning. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4604–4616.
- El-Hasnony, I.M.; Elzeki, O.M.; Alshehri, A.; Salem, H. Multi-label active learning-based machine learning model for heart disease prediction. Sensors 2022, 22, 1184.
- Shui, C.; Zhou, F.; Gagné, C.; Wang, B. Deep active learning: Unified and principled method for query and training. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020.
- Ménard, P.; Domingues, O.D.; Jonsson, A.; Kaufmann, E.; Leurent, E.; Valko, M. Fast active learning for pure exploration in reinforcement learning. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021.
- Punit, K.; Gupta, A. Active learning query strategies for classification, regression, and clustering: A survey. J. Comput. Sci. Technol. 2020, 35, 913–945.
- Michał, K.; Malesa, M.; Rapcewicz, J. Ultra-Lightweight Fast Anomaly Detectors for Industrial Applications. Sensors 2023, 24, 161.
Layer Type | Neurons |
---|---|
Input | 7840 |
Dropout | |
Dense | 128 |
BatchNormalization | |
Dense | 64 |
BatchNormalization | |
Dense | 64 |
BatchNormalization | |
Dense | 64 |
BatchNormalization | |
Dense | 7840 |
BatchNormalization | |
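The autoencoder in the table above can be sketched in Keras. The activations, dropout rate, optimizer, and loss are assumptions (the table specifies only layer types and sizes); 7840 is the flattened 7 × 7 × 160 MobileNetV2 feature map:

```python
from tensorflow import keras

def build_autoencoder(input_dim=7840):   # 7 x 7 x 160, flattened
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        keras.layers.Dropout(0.2),                   # rate is an assumption
        keras.layers.Dense(128, activation="relu"),
        keras.layers.BatchNormalization(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.BatchNormalization(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.BatchNormalization(),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.BatchNormalization(),
        keras.layers.Dense(input_dim),               # reconstruct the features
        keras.layers.BatchNormalization(),
    ])
    # Reconstruction error on non-faulty samples serves as the anomaly score.
    model.compile(optimizer="adam", loss="mse")
    return model

autoencoder = build_autoencoder()
```

Fitting on non-faulty feature vectors only lets the reconstruction error flag defective jars.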
Model | MobileNetV2 | MobileNetV2 | MobileNetV2 | MobileNetV2 |
---|---|---|---|---|
Data preprocess mode | Thresholded | Thresholded | Thresholded | Thresholded |
Samples to threshold | Close | Far | Far | Close |
Dataset used for each iteration | New | New | New | Appended |
Threshold tolerance | 0.25 | 0.15 | 0.15 | 0.25 |
Base learning rate | 0.0001 | 0.0001 | 0.0001 | 0.001 |
Learning rate divider | 10 | 5 | 5 | 10 |
Layers of fine tuning | 50 | 100 | 150 | 100 |
Metrics | Sensitivity / Specificity | Sensitivity / Specificity | Sensitivity / Specificity | Sensitivity / Specificity |
Base model | 0.7036 / 0.9808 | 0.7036 / 0.9808 | 0.7036 / 0.9808 | 0.7036 / 0.9808 |
Fine-tune iteration 1 | 0.9544 / 0.9744 | 0.9642 / 0.9808 | 0.9805 / 0.9808 | 0.9772 / 0.9744 |
Fine-tune iteration 2 | 1.0000 / 0.9808 | 0.9935 / 0.9808 | 1.0000 / 0.9808 | 0.9967 / 0.9808 |
Fine-tune iteration 3 | 1.0000 / 0.9808 | 1.0000 / 0.9808 | 1.0000 / 0.9808 | 1.0000 / 0.9808 |
Model | MobileNetV2 |
---|---|
Data preprocessing | Thresholded |
Samples to threshold | Far |
Dataset | New |
Threshold tolerance | 0.15 |
Base learning rate | 0.0001 |
Learning rate divider | 5 |
Layers of fine tuning | 150 |
Dataset | Base Model (Good) | Base Model (Faulty) | Fine-Tuned Model (Good) | Fine-Tuned Model (Faulty) |
---|---|---|---|---|
Mean value | 0.185 | 0.490 | 0.209 | 0.380 |
Standard deviation | 0.030 | 0.119 | 0.023 | 0.071 |
Active Learning Iteration | EfficientNetV2 Sensitivity | EfficientNetV2 Specificity | EfficientNetV2 F1-Score | MobileNetV2 Sensitivity | MobileNetV2 Specificity | MobileNetV2 F1-Score |
---|---|---|---|---|---|---|
0—base model | 0.983 | 0.717 | 0.770 | 0.703 | 0.980 | 0.806 |
1 | 0.980 | 0.685 | 0.749 | 0.954 | 0.974 | 0.951 |
2 | 0.980 | 0.673 | 0.742 | 1.000 | 0.980 | 0.980 |
3 | 0.980 | 0.673 | 0.742 | 1.000 | 0.980 | 0.980 |
Parameter | Value |
---|---|
Batch size | 10 |
Learning rate—initial training | 0.001 |
Learning rate—fine tuning | 0.0001 |
Layers of fine tuning | 100 |
Epochs of fine tuning | 10 |
Metric | MobileNetV2, Samples Close to Threshold | MobileNetV2, Samples Far from Threshold | EfficientNetV2, Samples Close to Threshold | EfficientNetV2, Samples Far from Threshold |
---|---|---|---|---|
Initial accuracy | 0.62 | 0.62 | 0.80 | 0.62 |
Accuracy after fine-tuning | 0.36 | 0.36 | 0.90 | 0.87 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Rapcewicz, J.; Malesa, M. Active Learning in Feature Extraction for Glass-in-Glass Detection. Electronics 2024, 13, 2049. https://doi.org/10.3390/electronics13112049