Blood Stain Classification with Hyperspectral Imaging and Deep Neural Networks
Abstract
1. Introduction
- We perform a study of blood stain classification from hyperspectral images with deep neural networks. To the best of the authors’ knowledge, this study is the first of its kind.
- Through the presented case study, we investigate the performance of deep neural networks on a real-life hyperspectral dataset that complements the typical tests done on remote sensing images, e.g., the Indian Pines or University of Pavia scenes. While the remote sensing transductive scenario is popular, we argue that the proposed dataset is, in terms of scene contents and acquisition conditions (camera distance, lighting, image preprocessing), close to many practical applications (e.g., food and materials inspection, forensic detection, medical imaging). While individual papers introducing DNN architectures (e.g., [24] or [11]) present comparisons to a selection of reference algorithms, the authors are aware of only two studies [8,19] that present a broad comparison and discussion of methods, both of which focus on remote sensing data. Our study extends that work by discussing the performance of networks in a different, 'local' sensing setting.
- We compare the performance of transductive and inductive hyperspectral classification. The inductive classification scenario is much less investigated, while at the same time being both more difficult and more relevant to practical applications.
Related Work
2. Methods
2.1. Multilayer Perceptron
2.2. Deep Recurrent Neural Network
2.3. Convolutional Neural Networks
- one-dimensional, taking into account only spectral vectors and ignoring spatial relationships between pixels;
- two- and three-dimensional, exploiting local neighbourhoods of hyperspectral pixels and spatial-spectral relationships.
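The difference between these variants can be illustrated with tensor shapes. Below is a minimal sketch in PyTorch; the band count, batch size, patch size, and layer widths are illustrative assumptions, not the architectures evaluated in this paper. A 1D convolution slides along the spectral axis of a single pixel, while a 3D convolution operates on a small spatial patch together with its spectrum.

```python
import torch
import torch.nn as nn

# Illustrative shapes only -- not the exact architectures evaluated in the paper.
bands, patch = 113, 5          # assumed number of spectral bands and spatial patch size

# 1D CNN: each pixel is a spectral vector; convolution runs along the band axis.
x_spec = torch.randn(8, 1, bands)                  # (batch, channels, bands)
conv1d = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=7)
print(tuple(conv1d(x_spec).shape))                 # (8, 16, 107)

# 3D CNN: a small spatial patch around each pixel keeps spatial-spectral context.
x_patch = torch.randn(8, 1, bands, patch, patch)   # (batch, channels, bands, H, W)
conv3d = nn.Conv3d(in_channels=1, out_channels=16, kernel_size=(7, 3, 3))
print(tuple(conv3d(x_patch).shape))                # (8, 16, 107, 3, 3)
```

The 2D variant sits between the two: it convolves over the spatial axes of a patch while treating the bands (usually after dimensionality reduction) as input channels.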
2.3.1. 1D Convolutional Neural Network
2.3.2. 2D Convolutional Neural Network
2.3.3. 3D Convolutional Neural Network
2.3.4. Performance Measures
3. Results
3.1. Evaluation Dataset
3.2. Experimental Procedure
- The Hyperspectral Transductive Classification (HTC) scenario treats every image in the dataset separately. In every experiment, the set of labelled pixels in the image is divided into disjoint training and test sets.
- The Hyperspectral Inductive Classification (HIC) scenario uses a pair of images: the set of labelled pixels from one image is used to train the classifier, which is then tested on the full set of labelled pixels from the second image.
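The two scenarios differ only in where the test pixels come from. The sketch below illustrates this with a synthetic stand-in for the labelled pixels of an image and a k-NN classifier as a placeholder for the evaluated models; all names and numbers here are hypothetical, not taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier  # placeholder for any pixel classifier

def labelled_pixels(n=1000, bands=113, classes=7, seed=0):
    """Hypothetical stand-in for the labelled pixels (spectra) of one image."""
    rs = np.random.default_rng(seed)
    X = rs.normal(size=(n, bands))
    y = rs.integers(0, classes, size=n)
    return X, y

# HTC: training and test pixels are disjoint subsets of the *same* image.
X, y = labelled_pixels(seed=1)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=0.05, stratify=y, random_state=0)
htc_acc = KNeighborsClassifier().fit(X_tr, y_tr).score(X_te, y_te)

# HIC: train on all labelled pixels of one image, test on a *different* image.
X_a, y_a = labelled_pixels(seed=1)
X_b, y_b = labelled_pixels(seed=2)
hic_acc = KNeighborsClassifier().fit(X_a, y_a).score(X_b, y_b)
```

In the HTC case the classifier sees pixels drawn from the same acquisition it is tested on; in the HIC case any shift between the two images (lighting, background, preprocessing) directly degrades the score, which is why the inductive setting is the harder one.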
3.3. Evaluation Procedure
3.3.1. Implementation
3.3.2. Evaluation Metrics
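The results tables report overall accuracy (OA), average (per-class) accuracy (AA), and Cohen's kappa. These can be computed with scikit-learn; the labels below are toy values chosen for illustration only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, cohen_kappa_score

# Toy ground truth and predictions for a 3-class problem (illustrative only).
y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 2, 0])

oa = accuracy_score(y_true, y_pred)            # overall accuracy: fraction correct
aa = balanced_accuracy_score(y_true, y_pred)   # average accuracy: mean per-class recall
kappa = cohen_kappa_score(y_true, y_pred)      # Cohen's kappa: agreement beyond chance

print(f"OA={oa:.3f}  AA={aa:.3f}  kappa={kappa:.3f}")
# OA=0.750  AA=0.750  kappa=0.600
```

AA matters for imbalanced class sets such as this dataset's stain classes: a classifier that ignores a small class can still score a high OA, but its AA drops by the full weight of the missed class.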
3.4. Experimental Results
3.5. Discussion
3.5.1. Performance of DNNs in the HTC and HIC Scenarios
3.5.2. Evaluation of Tested Networks
- RNN [14]—In the reference paper [19], the results of this architecture were average among the tested methods. In our HIC scenario, however, it achieved one of the best average accuracies, performing particularly well in the F(·)→E(·) scenarios. On the other hand, in simpler scenarios such as F(2)↔F(2k) it was on average less accurate than the other models. In some experiments, e.g., the HTC scenario for F(1) or E(1), its results had a large standard deviation. Analysis of per-class scores (see Tables 3–6) reveals that the model made significant errors for the artificial blood class.
- 3D CNN [49]—Similarly to the RNN [14], this architecture was not exceptional in [19], yet in our HIC scenario it achieved one of the highest average accuracies among the tested models. It also scored high in the F(2)↔F(2k) classification scenarios. An analysis of per-class scores shows that, on average, this architecture made the smallest errors among the tested DNN models.
- 3D CNN [11]—In the reference paper [19], this architecture achieved the best result for two of the three tested datasets. It scored high in our HTC scenario, in particular for the E(21) image, which was challenging for most of the tested models. In the HIC scenario its results were on par with the rest of the architectures; however, it had the highest per-class errors among the tested models (see Table 5 and Table 6). Training of this architecture was sometimes unstable, resulting in a classifier that assigned all pixels to a single class; our evaluation procedure described in Section 3.3.2 allowed us to detect and remove these outliers. In the HIC scenario, we also observed that the model had significant problems classifying examples from the tomato concentrate and ketchup classes, as well as a relatively high standard deviation.
- 2D CNN [24]—The results of this network were average among the tested models, although in some HIC cases, e.g., F(2k)→E(7), we observed a relatively high standard deviation.
- 1D CNN [10]—In the reference paper [19], this architecture achieved one of the highest classification accuracies. However, in our HIC scenario it performed worse than the other models on average in the F(·)→E(·) scenarios and also had the highest per-class errors (see Table 6). In some of the HTC scenarios, namely E(7) and E(21), its results were likewise below average.
- MLP—Despite its simplicity, the MLP achieved competitive results in our experiments, and in some scenarios, e.g., HTC E(7), it outperformed the other models. This is consistent with the results in [19] and [4], and suggests that a relatively simple architecture can often compete with more advanced convolutional neural networks.
3.5.3. Hyperspectral Blood Stains Classification
4. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. Running Times of Methods
[Table: training and test running times of SVM, MLP, 1D CNN [10], 2D CNN [24], 3D CNN [11], 3D CNN [49] and RNN [14] for the images F(1), F(1a), F(2), F(2k), F(7), F(21), E(1), E(7) and E(21); for F(2), F(2k), E(1), E(7) and E(21), test times are reported separately for the HTC and HIC scenarios.]
Appendix B. Ensemble Methods
Scenario | Min Diversity | Med Diversity | Max Diversity | Majority Voting | Position |
---|---|---|---|---|---|
F(1) | 0.0024 | 0.0099 | 0.0215 | 0.9983 | 1 |
F(1a) | 0.0028 | 0.0065 | 0.0104 | 0.998 | 3 |
F(2) | 0.0015 | 0.0036 | 0.0085 | 0.9986 | 2 |
F(2k) | 0.0021 | 0.032 | 0.219 | 0.9984 | 3 |
F(7) | 0.0021 | 0.0077 | 0.0154 | 0.9984 | 2 |
F(21) | 0.0017 | 0.0046 | 0.0161 | 0.9984 | 2 |
E(1) | 0.0617 | 0.1529 | 0.276 | 0.9534 | 1 |
E(7) | 0.0751 | 0.2079 | 0.3728 | 0.9062 | 1 |
E(21) | 0.0951 | 0.2083 | 0.3441 | 0.9077 | 1 |
F(1)→E(1) | 0.1413 | 0.2481 | 0.4669 | 0.5739 | 4 |
F(1a)→E(1) | 0.1583 | 0.2501 | 0.4868 | 0.6556 | 3 |
F(2)→E(7) | 0.1563 | 0.2829 | 0.4616 | 0.6005 | 3 |
F(2k)→E(7) | 0.198 | 0.2988 | 0.5253 | 0.5443 | 4 |
F(7)→E(7) | 0.1263 | 0.2684 | 0.4235 | 0.5662 | 5 |
F(21)→E(21) | 0.1701 | 0.2981 | 0.4661 | 0.5279 | 4 |
F(2)→F(2k) | 0.0143 | 0.0971 | 0.2783 | 0.9813 | 2 |
F(2k)→F(2) | 0.0084 | 0.0708 | 0.3257 | 0.9943 | 2 |
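The quantities in the table above can be reproduced on toy data. In this sketch, pairwise diversity is measured as the disagreement rate between two classifiers' predictions (an assumption about the exact diversity measure used), and the ensemble prediction is a hard per-pixel majority vote; all predictions below are synthetic.

```python
import numpy as np
from itertools import combinations

# Synthetic predictions of three hypothetical classifiers on ten test pixels.
preds = np.array([
    [0, 1, 1, 2, 0, 0, 1, 2, 2, 0],   # model A
    [0, 1, 2, 2, 0, 1, 1, 2, 2, 0],   # model B
    [0, 1, 1, 2, 0, 0, 1, 2, 1, 0],   # model C
])
y_true = np.array([0, 1, 1, 2, 0, 0, 1, 2, 2, 0])

def disagreement(p, q):
    """Pairwise diversity: fraction of samples on which two classifiers differ."""
    return float(np.mean(p != q))

pairwise = [disagreement(preds[i], preds[j])
            for i, j in combinations(range(len(preds)), 2)]
print(min(pairwise), np.median(pairwise), max(pairwise))   # 0.1 0.2 0.3

# Hard majority voting: the most frequent label per pixel across the ensemble.
vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)
print(float(np.mean(vote == y_true)))                      # 1.0
```

Note how the vote corrects every individual model's mistakes here: each error is made by only one of the three classifiers, which is exactly the situation where ensemble diversity pays off.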
References
1. Scafutto, R.D.M.; de Souza Filho, C.R.; de Oliveira, W.J. Hyperspectral remote sensing detection of petroleum hydrocarbons in mixtures with mineral substrates: Implications for onshore exploration and monitoring. ISPRS J. Photogramm. Remote Sens. 2017, 128, 146–157.
2. Thenkabail, P.S.; Mariotto, I.; Gumma, M.K.; Middleton, E.M.; Landis, D.R.; Huemmrich, K.F. Selection of hyperspectral narrowbands (HNBs) and composition of hyperspectral twoband vegetation indices (HVIs) for biophysical characterization and discrimination of crop types using field reflectance and Hyperion/EO-1 data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 427–439.
3. Li, J.; Huang, W.; Tian, X.; Wang, C.; Fan, S.; Zhao, C. Fast detection and visualization of early decay in citrus using Vis-NIR hyperspectral imaging. Comput. Electron. Agric. 2016, 127, 582–592.
4. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32.
5. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with Support Vector Machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790.
6. Benediktsson, J.A.; Ghamisi, P. Spectral-Spatial Classification of Hyperspectral Remote Sensing Images; Artech House: Norwood, MA, USA, 2015.
7. Romaszewski, M.; Głomb, P.; Cholewa, M. Semi-supervised hyperspectral classification from a small number of training samples using a co-training approach. ISPRS J. Photogramm. Remote Sens. 2016, 121, 60–76.
8. Paoletti, M.E.; Haut, J.M.; Plaza, J.; Plaza, A. Deep learning classifiers for hyperspectral imaging: A review. ISPRS J. Photogramm. Remote Sens. 2019, 158, 279–317.
9. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A Brief Review. Comput. Intell. Neurosci. 2018.
10. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sens. 2015, 2015.
11. Li, Y.; Zhang, H.; Shen, Q. Spectral–Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sens. 2017, 9, 67.
12. Boulch, A.; Audebert, N.; Dubucq, D. Autoencodeurs pour la visualisation d'images hyperspectrales. XXV Colloque GRETSI: Juan-les-Pins, France, 2017.
13. Liu, B.; Yu, X.; Zhang, P.; Tan, X.; Yu, A.; Xue, Z. A semi-supervised convolutional neural network for hyperspectral image classification. Remote Sens. Lett. 2017, 8, 839–848.
14. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655.
15. Zadora, G.; Menżyk, A. In the pursuit of the holy grail of forensic science: Spectroscopic studies on the estimation of time since deposition of bloodstains. TrAC Trends Anal. Chem. 2018, 105, 137–165.
16. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901.
17. Yang, J.; Mathew, J.J.; Dube, R.R.; Messinger, D.W. Spectral feature characterization methods for blood stain detection in crime scene backgrounds. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII; International Society for Optics and Photonics: Bellingham, WA, USA, 2016; Volume 9840, p. 98400E.
18. Edelman, G.; Manti, V.; van Ruth, S.M.; van Leeuwen, T.; Aalders, M. Identification and age estimation of blood stains on colored backgrounds by near infrared spectroscopy. Forensic Sci. Int. 2012, 220, 239–244.
19. Audebert, N.; Le Saux, B.; Lefevre, S. Deep Learning for Classification of Hyperspectral Data: A Comparative Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 159–173.
20. Vapnik, V.; Sterin, A.M. On structural risk minimization or overall risk in a problem of pattern recognition. Autom. Remote Control 1977, 10, 1495–1503.
21. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
22. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
23. Romaszewski, M.; Głomb, P.; Sochan, A.; Cholewa, M. A Dataset for Evaluating Blood Detection in Hyperspectral Images. arXiv 2020, arXiv:2008.10254.
24. Lee, H.; Kwon, H. Going Deeper with Contextual CNN for Hyperspectral Image Classification. IEEE Trans. Image Process. 2017, 26, 4843–4855.
25. Edelman, G.J.; Gaston, E.; van Leeuwen, T.G.; Cullen, P.; Aalders, M.C. Hyperspectral imaging for non-contact analysis of forensic traces. Forensic Sci. Int. 2012, 223, 28–39.
26. Li, B.; Beveridge, P.; O'Hare, W.T.; Islam, M. The application of visible wavelength reflectance hyperspectral imaging for the detection and identification of blood stains. Sci. Justice 2014, 54, 432–438.
27. Cadd, S.; Li, B.; Beveridge, P.; O'Hare, W.T.; Campbell, A.; Islam, M. The non-contact detection and identification of blood stained fingerprints using visible wavelength hyperspectral imaging: Part II effectiveness on a range of substrates. Sci. Justice 2016, 56, 191–200.
28. Edelman, G.; Van Leeuwen, T.G.; Aalders, M.C. Hyperspectral imaging for the age estimation of blood stains at the crime scene. Forensic Sci. Int. 2012, 223, 72–77.
29. Aalders, M.; Wilk, L. Investigating the Age of Blood Traces: How Close Are We to Finding the Holy Grail of Forensic Science? In Emerging Technologies for the Analysis of Forensic Traces; Springer: Cham, Switzerland, 2019; pp. 109–128.
30. Cholewa, M.; Głomb, P.; Romaszewski, M. A spatial-spectral disagreement-based sample selection with an application to hyperspectral data classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 467–471.
31. Chunhui, Z.; Bing, G.; Lejun, Z.; Xiaoqing, W. Classification of Hyperspectral Imagery based on spectral gradient, SVM and spatial random forest. Infrared Phys. Technol. 2018, 95, 61–69.
32. Li, L.; Wang, C.; Li, W.; Chen, J. Hyperspectral image classification by AdaBoost weighted composite kernel extreme learning machines. Neurocomputing 2018, 275, 1725–1733.
33. Kolesnikov, A.; Beyer, L.; Zhai, X.; Puigcerver, J.; Yung, J.; Gelly, S.; Houlsby, N. Big Transfer (BiT): General Visual Representation Learning. arXiv 2019, arXiv:1912.11370.
34. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
35. Cai, Z.; Fan, Q.; Feris, R.S.; Vasconcelos, N. A Unified Multi-scale Deep Convolutional Neural Network for Fast Object Detection. In Computer Vision—ECCV 2016, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016.
36. Zhang, H.; Li, Y.; Zhang, Y.; Shen, Q. Spectral-spatial classification of hyperspectral imagery using a dual-channel convolutional neural network. Remote Sens. Lett. 2017, 8, 438–447.
37. Mohan, A.; Venkatesan, M. HybridCNN based hyperspectral image classification using multiscale spatiospectral features. Infrared Phys. Technol. 2020, 108, 103326.
38. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
39. Pan, B.; Shi, Z.; Xu, X. MugNet: Deep learning for hyperspectral image classification using limited samples. ISPRS J. Photogramm. Remote Sens. 2018, 145, 108–119.
40. Cao, F.; Guo, W. Deep hybrid dilated residual networks for hyperspectral image classification. Neurocomputing 2020, 384, 170–181.
41. Okwuashi, O.; Ndehedehe, C.E. Deep support vector machine for hyperspectral image classification. Pattern Recognit. 2020, 103, 107298.
42. Cao, X.; Ge, Y.; Li, R.; Zhao, J.; Jiao, L. Hyperspectral imagery classification with deep metric learning. Neurocomputing 2019, 356, 217–227.
43. Sugiyama, M.; Krauledat, M.; Müller, K.R. Covariate Shift Adaptation by Importance Weighted Cross Validation. J. Mach. Learn. Res. 2007, 8, 985–1005.
44. Tsuboi, Y.; Kashima, H.; Hido, S.; Bickel, S.; Sugiyama, M. Direct Density Ratio Estimation for Large-scale Covariate Shift Adaptation. In Proceedings of the 2008 SIAM International Conference on Data Mining, Atlanta, GA, USA, 24–26 April 2008; pp. 443–454.
45. Kandaswamy, C.; Silva, L.M.; Alexandre, L.A.; Santos, J.M.; de Sá, J.M. Improving Deep Neural Network Performance by Reusing Features Trained with Transductive Transference. In Artificial Neural Networks and Machine Learning—ICANN 2014; Springer: Cham, Switzerland, 2014; pp. 265–272.
46. Bianchini, M.; Belahcen, A.; Scarselli, F. A Comparative Study of Inductive and Transductive Learning with Feedforward Neural Networks. In AI*IA 2016 Advances in Artificial Intelligence; Adorni, G., Cagnoni, S., Gori, M., Maratea, M., Eds.; Springer: Cham, Switzerland, 2016; pp. 283–293.
47. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. Available online: http://www.deeplearningbook.org (accessed on 8 October 2020).
48. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
49. Ben Hamida, A.; Benoit, A.; Lambert, P.; Ben Amar, C. 3-D Deep Learning Approach for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4420–4434.
50. Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46.
51. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2001.
52. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; pp. 8024–8035.
53. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
54. Harris, C.R.; Millman, K.J.; van der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362.
55. Hunter, J.D. Matplotlib: A 2D graphics environment. Comput. Sci. Eng. 2007, 9, 90–95.
56. Van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
57. Skjelvareid, M.H.; Heia, K.; Olsen, S.H.; Stormo, S.K. Detection of blood in fish muscle by constrained spectral unmixing of hyperspectral images. J. Food Eng. 2017, 212, 252–261.
58. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Li, J.; Plaza, A. Active learning with convolutional neural networks for hyperspectral image classification using a new Bayesian approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6440–6461.
59. Perez, F.; Avila, S.; Valle, E. Solo or Ensemble? Choosing a CNN Architecture for Melanoma Classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Long Beach, CA, USA, 17 June 2019.
60. Nanni, L.; Brahnam, S.; Ghidoni, S.; Maguolo, G. General Purpose (GenP) Bioimage Ensemble of Handcrafted and Learned Features with Data Augmentation. arXiv 2019, arXiv:1904.08084.
61. Wang, A.; Wang, Y.; Chen, Y. Hyperspectral image classification based on convolutional neural network and random forest. Remote Sens. Lett. 2019, 10, 1086–1094.
[Table: HTC scenario results (overall accuracy (OA), average accuracy (AA) and Cohen's κ) for SVM, MLP, 1D CNN [10], 2D CNN [24], 3D CNN [11], 3D CNN [49] and RNN [14] on the images F(1), F(1a), F(2), F(2k), F(7), F(21), E(1), E(7) and E(21).]
[Table: HIC scenario results (overall accuracy (OA), average accuracy (AA) and Cohen's κ) for SVM, MLP, 1D CNN [10], 2D CNN [24], 3D CNN [11], 3D CNN [49] and RNN [14] in the transfer scenarios F(1)→E(1), F(1a)→E(1), F(2)→E(7), F(2k)→E(7), F(7)→E(7), F(21)→E(21), F(2)→F(2k) and F(2k)→F(2).]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Książek, K.; Romaszewski, M.; Głomb, P.; Grabowski, B.; Cholewa, M. Blood Stain Classification with Hyperspectral Imaging and Deep Neural Networks. Sensors 2020, 20, 6666. https://doi.org/10.3390/s20226666