Systematic Review

Technological Frontiers in Brain Cancer: A Systematic Review and Meta-Analysis of Hyperspectral Imaging in Computer-Aided Diagnosis Systems

1 Department of Radiology, Ditmanson Medical Foundation Chia-yi Christian Hospital, Chia Yi 60002, Taiwan
2 Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan
3 Neurology Division, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st. Rd., Lingya District, Kaohsiung City 80284, Taiwan
4 Faculty of Allied Health Sciences, The University of Lahore, 1-Km Defense Road, Lahore 54590, Punjab, Pakistan
5 Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chia Yi 62247, Taiwan
6 Department of Technology Development, Hitspectra Intelligent Technology Co., Ltd., 8F.11-1, No. 25, Chenggong 2nd Rd., Qianzhen Dist., Kaohsiung City 80661, Taiwan
* Authors to whom correspondence should be addressed.
Diagnostics 2024, 14(17), 1888; https://doi.org/10.3390/diagnostics14171888
Submission received: 8 July 2024 / Revised: 19 August 2024 / Accepted: 23 August 2024 / Published: 28 August 2024
(This article belongs to the Special Issue Artificial Intelligence in Brain Cancer)

Abstract

Brain cancer is a substantial contributor to cancer-related mortality, and its timely identification remains difficult. Diagnostic precision depends heavily on the proficiency of radiologists and neurologists. Although computer-aided diagnosis (CAD) algorithms offer potential for early detection, most current research is limited by modest sample sizes. This meta-analysis comprehensively assesses the diagnostic test accuracy (DTA) of CAD models designed for the detection of brain cancer using hyperspectral imaging (HSI) technology. We applied the QUADAS-2 criteria to select seven papers and classified the proposed methodologies according to artificial intelligence method, cancer type, and publication year. To evaluate heterogeneity and diagnostic performance, we used Deeks’ funnel plots, forest plots, and accuracy charts. Our results suggest that there is no notable variation among the investigations. The CAD techniques examined exhibit a notable level of precision in the automated detection of brain cancer; however, the absence of external validation hinders their implementation in real-time clinical settings. This highlights the need for additional studies to validate the CAD models for wider clinical applicability.

1. Introduction

According to cancer statistics from 2016 [1], brain cancer is considered one of the leading causes of cancer-related mortality around the world. Although significant breakthroughs have been made in the treatment of many other types of cancer [2,3], brain tumors are currently the leading cause of cancer-related fatalities among children aged 0–14 [4]. The fatality rate due to brain cancer is particularly high in Asia [5,6]. Characterized by uncontrolled tissue growth in the brain, brain tumors can severely impair the body’s normal functioning [5]. There are two main types of brain tumors [7]: benign tumors [8,9], which are non-cancerous and generally do not invade nearby tissues or spread, and malignant tumors [10,11], which are cancerous, grow rapidly, and may spread to other parts of the body. Malignant tumors, in turn, can be classified as primary tumors, originating within the brain, or secondary tumors, which commonly result from the spread of cancer from other parts of the body [12]. Brain cancer remains difficult to detect, and treating intracranial cancer at its earliest stages remains a challenge [13]. Various types of tumors can produce different symptoms [14]. The proximity of certain brain tumors to critical anatomical structures further complicates diagnosis and treatment [15]. Most often, the first symptom is headache, which can be mild, severe, or intermittent. Other symptoms include seizures, loss of balance, nausea, vomiting, disturbed vision or smell, or paralysis in parts of the body.
The diagnosis of a brain tumor involves the segmentation of tumor regions and the classification of the tumor [16,17], using methods that are either invasive or non-invasive. Biopsy [18,19], the invasive approach, involves collecting a sample of the tumor through an incision and provides crucial information about the tumor’s histological type, classification, and grade [20]. However, it is not only time-consuming and invasive, but also subjective and inconsistent [21]. Modern imaging techniques such as Computed Tomography (CT) [22,23], Magnetic Resonance Imaging (MRI) [24,25], and Positron Emission Tomography (PET) [26,27] are much faster and safer non-invasive alternatives. Despite their efficacy, these imaging scans do not easily allow for the precise quantification of tumor volume, because the accumulation of extracellular water (edema) around the tumor makes accurate discrimination of tumor margins challenging [28].
Based on cancer statistics from the United States [29], the incidence of brain cancer and related mortality is consistently rising among individuals aged 4 to 50 [30]. Early diagnosis plays a crucial role in mitigating the severity of brain cancer by facilitating appropriate treatment. Image processing is a predominant method for identifying brain cancer, with various approaches employing Artificial Neural Networks (ANNs), such as grey-level co-occurrence matrix (GLCM)-based ANNs, and Residual Neural Network (ResNet) models based on Convolutional Neural Networks (CNNs). Many advancements in cancer detection have been made using traditional machine-learning models such as linear discriminant analysis [31], quadratic discriminant analysis [32], support vector machines (SVMs) [33,34], decision trees and naïve Bayes [35], the k-nearest neighbors algorithm [36], k-means [37,38], random forests (RFs) [39], maximum likelihood [40], minimum spanning forests [41], Gaussian mixture models [42], semantic texton forests [43], ANNs [44,45], and so on. Numerous studies indicate the superiority of machine-learning algorithms over conventional methods [46,47]. However, challenges arise with these methods due to the substantial dataset requirements for training and issues with handling input transformations. A study by Ewan Gray et al. [48] focused on an early economic evaluation for developing a spectroscopic liquid biopsy to detect brain cancer. The research involved 433 blood samples, encompassing cases both with and without brain tumors. Utilizing a fivefold cross-validation strategy for accuracy assessment on the development data, the study reported a sensitivity of 92.8% and a specificity of 91.5%. Yong-Eun Lee Koo et al. investigated several types of nanoparticles for imaging and treating brain cancer [49]. Iron-oxide-based magnetic nanoparticles exhibited promising potential as MRI contrast agents in the brain, based on in vitro cellular studies, in vivo animal studies, and human studies. Yan Zhou et al. employed resonance Raman (RR) spectroscopy with an excitation wavelength of 532 nm to distinguish between normal brain tissues, cancerous brain tumors, and benign brain lesions. The statistical analysis of RR data initially yielded a diagnostic sensitivity of 90.9% and specificity of 100% [50]. El-Sayed A. El-Dahshan et al. developed a robust classification method for the efficient and automated classification of normal/abnormal brain images [51]. Their algorithm, implemented on a dataset of 101 brain MRI images (14 normal and 87 abnormal), demonstrated an impressive classification accuracy of 99%, sensitivity of 92%, and specificity of 100%.
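To make the classical machine-learning pipeline described above concrete, the following sketch trains an RBF-kernel SVM on per-pixel spectra with scikit-learn. It is a minimal, hypothetical example: the array shapes, the randomly generated spectra, and the binary tissue labels are placeholders and are not taken from any of the cited studies.

```python
# Minimal sketch (hypothetical data): per-pixel spectral classification with an SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 128            # assumed dataset size
X = rng.random((n_pixels, n_bands))      # one reflectance spectrum per pixel
y = rng.integers(0, 2, size=n_pixels)    # 0 = healthy tissue, 1 = tumor (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RBF-kernel SVM with band-wise standardization, a common baseline for spectral data
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```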
Whether they rely on a spectroscopic liquid biopsy or on a computer-aided diagnosis (CAD) tool, these approaches to brain cancer detection have some major limitations, and further experiments and evaluation are desirable to establish whether they have generic applicability, while extracting more efficient features and increasing the size of the training datasets. Hyperspectral imaging (HSI) expands the amount of data gathered beyond what is visible to the human eye, in contrast to traditional RGB (red, green, and blue) imaging, which is limited to recording three diffuse Gaussian spectral bands in the visible spectrum (380–780 nm) [52]. One of HSI’s advantages over other diagnostic technologies is that it is a fully non-invasive, non-contact, non-ionizing, and label-free sensing method [53], and its use for the diagnosis of cancer is becoming more common around the globe. HSI increases the amount of information by capturing data in many contiguous and narrow spectral bands over a wide spectral range, and it reconstructs the reflectance spectrum for every pixel of the image [54]. By measuring the absorption and reflection of light at different wavelengths, HSI can simultaneously provide information about different tissue components and their spatial distribution [55]. Hyperspectral cameras cover different spectral ranges, depending on the type of sensor used: Charge-Coupled Device (CCD) sensors cover the Visible and Near-Infrared (VNIR) range from 400 to 1000 nm, while Indium Gallium Arsenide (InGaAs) sensors can capture HS images in the Near-Infrared (NIR) range, between 900 and 1700 nm [56].
HSI is a technology that combines conventional imaging and spectroscopy to simultaneously obtain the spatial and the spectral information of an object [57]. HSI sensors generate a three-dimensional (3D) data structure, called an HS cube. Spatial information is contained in the first two dimensions, while the third dimension encompasses the spectral information [58]. HS cameras are mostly classified into four different setups, depending on the techniques employed to obtain the HS cube: whiskbroom (point-scanning) cameras, push-broom (line-scanning) cameras, cameras based on spectral scanning (area-scanning or plane-scanning), and snapshot (single shot) cameras [59].
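To make the hypercube layout concrete, the short sketch below builds a placeholder HS cube in NumPy and shows how one pixel’s spectrum and one spectral-band image are indexed. The dimensions and the wavelength range are illustrative assumptions, not the specification of any particular camera reviewed here.

```python
# Illustrative HS cube: rows x columns (spatial) x bands (spectral); all values are placeholders.
import numpy as np

rows, cols, bands = 512, 512, 826                 # assumed VNIR push-broom scan size
cube = np.zeros((rows, cols, bands), dtype=np.float32)
wavelengths = np.linspace(400.0, 1000.0, bands)   # nm, assumed VNIR range

pixel_spectrum = cube[100, 200, :]   # full reflectance spectrum of one spatial pixel
band_image = cube[:, :, 413]         # grayscale image at one spectral band
print(pixel_spectrum.shape, band_image.shape, round(wavelengths[413], 1))
```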
HSI technology has proved crucial in the field of medicine throughout the years, with various improvements enabling it to contribute effectively to many fields beyond medicine. HSI has found comprehensive applications in face recognition [60], remote sensing [61], medical diagnosis [62], archaeology and art conservation [63], vegetation and water resource control, food quality and safety control, forensic medicine [64,65], crime scene detection [66,67], environmental sensing [68], counterfeit hologram detection [69], glass film detection [70], video capsule endoscopy [71], energy transmission systems, ink mismatch detection in unbalanced clusters [72], lunar penetrating radar [73], security holograms [74], air pollution [75], diabetic retinopathy [76], low-cost holograms [77], counterfeit currency [78], and biomedical areas [79,80]. In order to take advantage of both imaging instruments and to offer more helpful information for illness diagnosis and treatment, HSI systems have been combined with many other techniques, such as the laparoscope [81], colposcope [82], fundus camera [83,84], and Raman scattering [85]. The most popular combination is with a microscope [86,87] or a confocal microscope [88], which has been shown to be helpful in the investigation of the spectral properties of tissues. Table 1 lists research studies on cancer detection and diagnosis using HSI technology in recent years. Figure 1 summarizes recent research on diagnosing brain cancer that combines HSI and CAD technology, along with their advantages and disadvantages. It presents a thorough summary of the process and essential elements involved in identifying and diagnosing brain cancer through the use of modern imaging methods and CAD systems. The components symbolize the different phases and essential parameters involved in the diagnostic procedure. It highlights the significance of CT and MRI scans as primary imaging modalities for identifying brain disorders. Furthermore, it emphasizes the crucial role of radiologists and neurologists in analyzing and interpreting these images, underscoring their specialized knowledge and skill in the diagnostic procedure. The depiction of a patient receiving an MRI scan exemplifies the practical implementation of various imaging techniques in healthcare environments. In addition, the utilization of AI and HSI technology is necessary for early detection. These sophisticated instruments are illustrated via depictions of neural networks and spectral analysis, demonstrating their function in analyzing intricate data and assisting in the accurate identification of malignant tissues. The workflow demonstrates the interrelated stages, starting with image capture and expert analysis, followed by data processing and confirmation of accuracy, highlighting the requirement for external verification and the significance of attaining high sensitivity and specificity in CAD models. Therefore, in this study, a diagnostic meta-analysis has been conducted on CAD studies that diagnose brain cancer using only HSI. A complete QUADAS-2 quality assessment was performed to filter out only quality studies, which were then used to perform Deeks’ funnel plot analysis, accuracy analysis, and forest plots.

2. Materials and Methods

This section covers the procedures involved in obtaining the relevant studies for this review, in particular, studies related to brain cancer detection using HSI technology. The criteria used to choose relevant studies, both for inclusion and exclusion, are presented in this section. This review was performed in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [95]. The overall process of the study is shown in Figure 2. The PRISMA flowchart for the study selection can be seen in Supplement S1 Figure S1.

2.1. Study Selection Criteria

The purpose of this review is to illuminate the developments in brain cancer diagnosis and detection using HSI, highlighting the system’s strengths and weaknesses in this regard. Studies that satisfy the specified inclusion criteria are the main focus of this review: (1) studies with conclusive numerical results such as dataset size, sensitivity, accuracy, precision, and area under the curve (AUC); (2) studies focusing on brain cancer detection using HSI; (3) studies published within the past six years; (4) studies published in a first-quartile journal with an H-index greater than 50; (5) studies with a prospective or retrospective design; and (6) studies written in English. Furthermore, this review excludes studies that meet the following exclusion criteria: (1) studies with small-scale data investigations; (2) narrative reviews, systematic reviews, and meta-analyses; (3) comments, proceedings, or study protocols; and (4) conference papers. In this study, two authors applied the Quality Assessment of Diagnostic Accuracy Studies Version 2 (QUADAS-2) tool to evaluate the quality of the methodologies in the articles being examined. QUADAS-2 includes bias assessment in patient selection and during the index test [96]. Additionally, it evaluates the risk of bias for the reference standard and for flow and timing as key domains. The applicability assessment was performed in accordance with the PRISMA guidelines, along with the bias assessment. Two authors (R.K. and F.A.) worked independently to perform the QUADAS-2 assessment, and the results are shown in Figure 3.
Figure 2. Overall flowchart of the study [32,97,98,99,100,101,102].

2.2. QUADAS-2 Results

Table 2 compiles the QUADAS-2 results for the seven studies that are part of this review. It includes the applicability concerns and the degree of bias risk in the studies. Every study was examined for bias risk according to flow and timing, patient selection, reference standard, and index test; applicability issues were examined under patient selection, reference standard, and index test (for the QUADAS-2 domain plot, refer to Supplement S1 Figure S2).

3. Results

This section presents the findings of the review, along with a brief explanation of each study and the clinical characteristics that were noted. The numerical findings from each study are also included, and the results are compared in terms of sensitivity, specificity, and accuracy.

3.1. Clinical Features Observed in the Studies

The studies selected for this analysis examine the performance of various HSI methods for brain cancer detection and diagnosis, as summarized in Table 3. Additionally, utilizing subgrouping and meta-analysis, the accuracy, sensitivity, specificity, and area under the curve (AUC) in identifying and categorizing brain cancer lesions and neoplasms were noted for each article. The various HSI techniques employed in the articles were evaluated and contrasted using these indicators.
Martinez et al. used HSI instrumentation to generate an in vivo HSI brain cancer image database. The database employed in this study was composed of 26 HS images from adult patients, and an SVM classifier was used to compare the results. In the 214- and 128-band ranges, the accuracy results stabilized at around 80%, with a sensitivity of 55.77% and a specificity of 85.37% [97]. This study included only 26 HS images in total, so external validation would be required before implementation in a clinical setting. In another study, Urbanos et al. used the SVM, RF, and CNNs across the 660 to 950 nm range of the measured spectrum to classify in vivo brain tissue in thirteen patients with high-grade glioma pathology. The authors found that the SVM achieved 76.5% accuracy, 26% sensitivity, and 91% specificity, on average, while RF achieved 82.5% accuracy, 48.5% sensitivity, and 99% specificity, and the CNN achieved 82.5% accuracy, 48.5% sensitivity, and 99% specificity. The CNN model had the best result out of the three models used in the study; however, the sensitivity achieved was only around 48.5%, which would need to be increased significantly for successful implementation in hospitals. Taking into account the contribution of each band to the global accuracy, the results obtained were 3.81 times higher than the best results of the state-of-the-art models [98].
Ortega et al. processed the data with three different classifiers, SVMs, ANNs, and RF, to classify and identify the tissue samples. Twenty-one diagnosed pathological slides obtained from ten different patients were used in this research work. The ANN achieved the best average overall accuracy of 78.02%, the SVM achieved the best average sensitivity of 75.69%, and RF achieved the best average specificity of 79.33%, demonstrating that none of the analyzed classifiers is optimal for all patients [99]. Therefore, the best way to use this method in a hospital would be to combine all three models and to classify with a threshold setting for each model. Manni et al. proposed a comparison study using a 2D CNN and two conventional classification methods (the SVM, and the SVM classifier combined with a 3D–2D hybrid CNN for feature extraction). Moreover, the method was compared with a 1D CNN, and a sensitivity of 68%, specificity of 98%, and AUC of 70% for tumor tissue classification were achieved [100]. The study had an impressive specificity of 98%; however, if the sensitivity could be increased, the approach could be implemented in real time. A study conducted by Ortega et al. proposed the use of CNNs for the classification of hematoxylin and eosin (H&E)-stained brain tumor samples. The instrumentation employed in their study consisted of an HS camera coupled to a conventional light microscope, working in the VNIR spectral range from 400 to 1000 nm, with a spectral resolution of 2.8 nm, sampling 826 spectral channels and 1004 spatial pixels. It was concluded that the classification results of HSI provided a more balanced sensitivity of 88% and specificity of 77%, which is the goal for clinical applications, improving the average sensitivity and specificity by 7% and 9%, respectively, with respect to the RGB imaging results of 81% sensitivity and 68% specificity [101].
Fabelo et al. described a methodology using a set of five in vivo brain surface HS images to develop a surgical tool for identifying and delineating the boundaries of tumor tissue using HS images. The hyperspectral acquisition system employed in their work is called the HELICoiD demonstrator. The SVM classifier offered specificity and sensitivity results of 99.62% and 99.91%, respectively, with an overall accuracy of 99.72%. Even though the overall study achieved impressive results, external validation is required to properly assess the clinical implementation procedure. In another study, Fabelo et al. employed a VNIR push-broom camera to obtain HS images in the spectral range between 400 and 1000 nm. In order to evaluate deep-learning methods against traditional SVM-based machine-learning algorithms, the authors used the 2D-CNN, 1D-DNN, and 1D-CNN methods. The 1D-DNN achieved the best results, obtaining 94% accuracy and 88% sensitivity.

3.2. Meta-Analysis of the Studies

Table 4 shows the meta-analysis and subgroup analysis for brain cancer detection. Among the seven studies included in this review, the average obtained accuracy, sensitivity, specificity, and AUC were 78.96%, 62.43%, 90.05%, and 81.06%, respectively.
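As context for how per-study entries such as these can be derived before pooling, the sketch below computes sensitivity, specificity, and Wilson 95% confidence intervals from 2×2 confusion counts, which is also the form summarized by the forest plots in Section 3.3. The counts are invented placeholders, not data extracted from the included studies.

```python
# Hypothetical 2x2 counts per study: (TP, FN, TN, FP); not taken from the reviewed papers.
from statsmodels.stats.proportion import proportion_confint

studies = {
    "Study A": (120, 30, 400, 25),
    "Study B": (80, 60, 350, 10),
}
for name, (tp, fn, tn, fp) in studies.items():
    sens = tp / (tp + fn)                      # sensitivity = TP / (TP + FN)
    spec = tn / (tn + fp)                      # specificity = TN / (TN + FP)
    se_lo, se_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
    sp_lo, sp_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
    print(f"{name}: Se={sens:.2f} [{se_lo:.2f}, {se_hi:.2f}], "
          f"Sp={spec:.2f} [{sp_lo:.2f}, {sp_hi:.2f}]")
```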
The CNN had the best performance compared to the other machine-learning methods, with 80.34% accuracy, 62.41% sensitivity, 89.84% specificity, and 81.8% AUC. Across the dataset of six studies, the SVM achieved 76.22% accuracy, 58.23% sensitivity, 90.18% specificity, and 87.6% AUC, while the ANN achieved 78.02% accuracy, 75.44% sensitivity, and 77.03% specificity. However, 11 studies used different SVM methods, making the SVM the most used method for the detection and diagnosis of brain cancer in this evaluation. Himar Fabelo et al. achieved the most promising results using the SVM, with an accuracy of 99.72%, sensitivity of 99.62%, and specificity of 99.91%.
In vivo and in vitro techniques are used by researchers to develop drugs and to study diseases. In vivo refers to research conducted on a living organism, and machine-learning algorithms have been developed using HS images of in vivo brain cancer to identify brain tumor margins. In vitro, on the other hand, refers to research conducted in a laboratory dish or test tube. Each type has benefits and drawbacks. Studies that used the in vitro method showed better results, with an accuracy of 79.93%, sensitivity of 84.81%, specificity of 79.46%, and AUC of 85.46%, compared to studies that used the in vivo method, which achieved an accuracy of 78.72%, sensitivity of 56.84%, specificity of 92.70%, and AUC of 85.46%.
High-grade gliomas and glioblastoma are the two most prevalent kinds of brain cancer. Gliomas, arising from glial cells in the central nervous system of the adult brain, are the most common primary intracranial tumors and account for 70–80% of all brain tumors. Grade III gliomas and glioblastomas (GBMs) are considered the most aggressive and highly invasive, as they spread quickly to other parts of the brain. Despite aggressive treatments that combine surgery with radiation, chemotherapy, and biological therapy, GBM tumors remain a major therapeutic challenge, with survival following diagnosis of 12 to 15 months and less than 3 to 5% of patients surviving longer than 5 years. GBM tumors have a very poor prognosis and are also highly vascular. In GBM, several angiogenic receptors and factors are upregulated, triggering angiogenesis-signaling pathways by either downregulating tumor suppressor genes or activating oncogenes. Images of GBM, which are the most commonly used by the studies involved in this review, yielded 78.05% accuracy, 61.94% sensitivity, 89.61% specificity, and 85.46% AUC.
Moreover, an increasing trend was observed in accuracy, sensitivity, and specificity when the included studies were grouped by year of publication. This can be illustrated by comparing studies conducted before 2021, which had 78.05% accuracy, 61.94% sensitivity, 61% specificity, and 85.46% AUC, to studies from 2021 to 2023, which had 82.6% accuracy, 64.4% sensitivity, 91.8% specificity, and 93.05% AUC. It was noted that there were fewer studies after 2020, whereas continuous work was presented in the earlier years. Hence, progressive studies using HSI technology could be beneficial for improving its overall performance in cancer detection and diagnosis.

3.3. Subgroup Meta-Analysis

An accuracy graph was generated to visualize the accuracy of the different CAD methods in brain cancer detection. Figure 4 shows the overall accuracy chart of the different CAD methods used in the studies. The most commonly used method was the SVM, with several types of bands, and its use was the most dominant among the CAD models applied in the different studies. The highest accuracy of 99.72% was achieved by Fabelo et al. through the use of the SVM. By contrast, the lowest accuracy of 53.8% was obtained with the L3 band of the SVM as the CAD method in the investigation by Martinez et al.
Furthermore, Deeks’ funnel plots were produced for the different classifications, namely machine-learning method, in vivo/in vitro setting, cancer type, and publication year, as well as for all classifications combined, as shown in Figure 5 (for detailed Deeks’ plots, refer to Figures S6–S9 for the in vivo/in vitro setting, AI methods, cancer type, and publication year, respectively). Deeks’ funnel plot relates each study’s diagnostic odds ratio to the inverse of the square root of its effective sample size (for the sensitivity and specificity computations in the meta-regression, refer to Supplement S1 Table S1). The Deeks’ funnel plot obtained in this study provided no indication of heterogeneity or bias, with p = 0.31 for cancer type. However, Deeks’ funnel plot showed heterogeneity, with p = 0.003 for all studies and p = 0.001 for the CAD methods (for the regression statistics of cancer type, in vivo/in vitro setting, AI methods, publication years, and all studies, refer to Supplement S1 Tables S2–S6; for the p-values of Deeks’ funnel plot for the seven cancer types, in vivo/in vitro setting, AI method, publication years, and all studies, refer to Supplement S1 Tables S7–S11). Finally, forest plots of sensitivity and specificity for each CAD method and each study involved were generated at the 95% level of confidence (for the sensitivity and specificity computations in the meta-regression, refer to Supplement S1 Table S1). The top left quadrant of Deeks’ funnel plot usually represents studies with small sample sizes and large effect sizes. Studies with large sample sizes and large effect sizes lie in the top right quadrant, while studies with small sample sizes and small effect sizes are present in the bottom left quadrant. The bottom right quadrant contains studies with large sample sizes but small effect sizes. The studies in the top right quadrant suggest the best results; on the contrary, the studies in the bottom left quadrant are considered to have limited statistical power or inconclusive findings. Moreover, a meta-regression analysis was conducted to compare the sensitivities and specificities of each study according to publication year, cancer type, in vivo/in vitro setting, and AI method utilized. Figure 6 shows the univariable meta-regression of sensitivity and specificity with 95% confidence intervals (for more forest plots, see Supplement S1 Figures S3–S5 for the forest plots of cancer types, AI types, and in vivo/in vitro types, respectively).
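As a hedged illustration of the asymmetry test that underlies a Deeks’ funnel plot, the sketch below regresses the log diagnostic odds ratio on the inverse square root of the effective sample size, weighted by the effective sample size, and reports the p-value of the slope. The 2×2 counts are invented placeholders, and this statsmodels-based sketch is only one possible implementation, not necessarily the software used in the analyses reported here.

```python
# Hypothetical per-study counts (TP, FP, FN, TN); a 0.5 continuity correction avoids division by zero.
import numpy as np
import statsmodels.api as sm

counts = np.array([
    [120, 25, 30, 400],
    [80, 10, 60, 350],
    [45, 5, 15, 200],
], dtype=float)
tp, fp, fn, tn = counts.T + 0.5

log_dor = np.log((tp * tn) / (fp * fn))                  # log diagnostic odds ratio
ess = 4.0 / (1.0 / (tp + fn) + 1.0 / (fp + tn))          # effective sample size
inv_sqrt_ess = 1.0 / np.sqrt(ess)

# Weighted regression of log(DOR) on 1/sqrt(ESS); a small slope p-value suggests asymmetry.
model = sm.WLS(log_dor, sm.add_constant(inv_sqrt_ess), weights=ess).fit()
print("asymmetry slope p-value:", float(model.pvalues[1]))
```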

4. Discussion

The diagnostic accuracy of the studies on the detection and diagnosis of brain cancer was found to be “very good” according to Youden’s index, considering the context in which the studies were undertaken. Sensitivity and specificity at the optimal cut-off point associated with Youden’s index are key measurements for determining diagnostic accuracy in diseased and healthy populations, respectively [103]. In accordance with diagnostic test accuracy (DTA) requirements, machine-learning algorithms such as CNNs have demonstrated encouraging outcomes in processing medical imaging data for cancer detection, including brain cancer [104].
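For reference, Youden’s index summarizes sensitivity and specificity at a given cut-off in a single value; the worked figure below simply plugs in the sensitivity and specificity reported for the best-performing SVM study (Fabelo et al.) as an illustration.

$$
J = \text{sensitivity} + \text{specificity} - 1, \qquad J_{\text{Fabelo}} = 0.9962 + 0.9991 - 1 \approx 0.995.
$$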
The present clinical recommendations for the management and diagnosis of brain cancer still rely mainly on the expertise and interpretation of imaging specialists, despite the fact that HSI has shown promising outcomes in the identification of neurological cancers. In the near future, the application of HSI technology for the identification of brain cancer is projected to achieve broader awareness. Although specialists remain hesitant to use them, CAD algorithms offer diagnostic capabilities superior to those of standard machine-learning methods.
Guidelines for the identification of brain cancer using neuroimaging techniques have been implemented by the American Society of Neuroradiology (ASNR) [105]. For the identification of a wide range of brain malignancies, including gliomas and metastatic brain tumors, these criteria require a binding threshold performance that establishes high sensitivity and specificity. Although the overall sensitivity showed a wide range of values, the specificity requirements were met by only a small number of the publications included in this review. Even though it is difficult to achieve the defined performance standards for brain cancer detection, the incorporation of modern approaches holds promise for improving early diagnosis and, ultimately, treatment and health outcomes.
Several limitations were observed in the research, despite the diagnostic performance being quite exceptional. One limitation is the very small number of patients who participated in the studies: one of the studies included in this review had a dataset consisting of only five hyperspectral images from five patients. This is a relatively small number when compared to the study carried out by Fabelo et al., which had a sample size of thirty-six hyperspectral images from twenty-two different patients, from which more than three hundred thousand spectral signatures were obtained. Due to the restricted datasets, there is an increased risk of sampling bias, which has the potential to distort the results and reduce the relevance of the findings; the datasets might not accurately represent the diverse features of the population, threatening the reliability of the results. The fact that only a limited number of cancer types were included is another constraint that exacerbates this problem: it reduces the comprehensiveness of the studies and makes it difficult to apply their findings to a wide variety of cancer types.
In their review analysis, Katharine et al. [106] included around five different categories of cancer research. An additional disadvantage is the limited selection of machine-learning techniques utilized, which risks overlooking potentially better procedures that might yield more accurate results. In a comprehensive review of the role of deep learning in brain cancer detection and classification, Nazir et al. [107] highlighted all the machine-learning techniques that were utilized between 2015 and 2020. Furthermore, the limitation imposed by computational power constitutes a significant bottleneck for in-depth analysis. The increasing complexity of machine-learning algorithms, together with ever-larger datasets, requires substantial computational resources for efficient data processing and model training; otherwise, experiments take longer and the analysis scales poorly. Addressing these restrictions, for example by expanding the datasets, including a wider range of cancer types, utilizing a variety of machine-learning methodologies, and increasing the available computational resources, could dramatically improve the quality and relevance of future studies in this field.

5. Conclusions

This systematic review and meta-analysis has provided significant insights into the capabilities and limitations of hyperspectral imaging (HSI) technology in brain cancer detection. We observed that while HSI offers substantial promise in enhancing diagnostic accuracy, challenges remain in terms of limited sample sizes and the generalizability of the study results. To overcome these hurdles, we recommend a concerted effort towards conducting larger, multicentric studies that would enable a more robust statistical analysis and help validate the findings across different demographics and clinical environments. Moreover, integrating HSI with existing diagnostic frameworks could potentially streamline workflows and enhance the decision-making process in clinical settings. This integration should focus on harnessing advanced machine-learning algorithms that can efficiently process the complex data generated by HSI, thereby reducing the burden on clinicians and potentially leading to faster and more accurate diagnosis. In conclusion, while the journey of incorporating HSI into routine clinical practice is still at a nascent stage, our findings underscore its transformative potential in oncological diagnostics. Future research should aim at addressing the current limitations, fostering technological advancements, and ultimately, facilitating the adoption of HSI in clinical practice to improve the outcomes for patients with brain cancer.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics14171888/s1, Table S1. Sensitivity and specificity computations in the meta-regression; Table S2. Regression statistics (cancer type); Table S3. Regression statistics (vivo); Table S4. Regression statistics (AI methods); Table S5. Regression statistics (publication years); Table S6. Regression statistics (all studies); Table S7. p-value of Deeks’ funnel plot (cancer type); Table S8. p-value of Deeks’ funnel plot (vivo); Table S9. p-value of Deeks’ funnel plot (AI method); Table S10. p-value of Deeks’ funnel plot (publication years); Table S11. p-value of Deeks’ funnel plot (all studies); Figure S1. Search process flowchart; Figure S2. QUADAS-2 domain; Figure S3. Sensitivity and specificity forest plot (cancer types); Figure S4. Sensitivity and specificity forest plot (AI types); Figure S5. Sensitivity and specificity forest plot (vivo types); Figure S6. Deeks’ funnel plot for vivo; Figure S7. Deeks’ funnel plot for AI methods; Figure S8. Deeks’ funnel plot for cancer type; Figure S9. Deeks’ funnel plot for publication year; Figure S10. Accuracy chart of studies.

Author Contributions

Conceptualization, J.-H.L., R.K. and W.-S.L.; data curation, J.-H.L., W.-S.L. and R.K.; formal analysis, F.A., J.-H.L. and W.-S.L.; funding acquisition, A.M. and H.-C.W.; investigation, R.K. and F.A.; methodology, R.K. and F.A.; project administration, A.M. and H.-C.W.; resources, H.-C.W.; software, R.K. and A.M.; supervision, A.M. and H.-C.W.; writing—original draft, F.A.; writing—review and editing, F.A. and H.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Science and Technology Council, the Republic of China, under grant NSTC 113-2221-E-194-011-MY3. This work was also financially/partially supported by the Ditmanson Medical Foundation Chia-Yi Christian Hospital (CYCH-2023-11) and the Kaohsiung Armed Forces General Hospital research project KAFGH_D_113034 in Taiwan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in this article upon reasonable request to the corresponding author (H.-C.W.). The current study has not been registered; however, upon request to the corresponding author, all the details regarding this systematic review will be provided, along with the template data collection forms, the data extracted from the included studies, the data used for all the analyses, the analytic code, and any other materials used in the review.

Conflicts of Interest

Author Hsiang-Chen Wang was employed by Hitspectra Intelligent Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Siegel, R.L.; Miller, K.D.; Jemal, A. Cancer Statistics, 2017. CA Cancer J. Clin. 2017, 67, 7–30. [Google Scholar] [CrossRef]
  2. Tempany, C.M.C.; Jayender, J.; Kapur, T.; Bueno, R.; Golby, A.; Agar, N.; Jolesz, F.A. Multimodal imaging for improved diagnosis and treatment of cancer. Cancer 2015, 121, 817–825. [Google Scholar] [CrossRef]
  3. Gull, S.; Akbar, S. Artificial Intelligence in Brain Tumor Detection through MRI Scans. In Artificial Intelligence and Internet of Things; CRC Press: Boca Raton, FL, USA, 2021; Volume 1, 36p. [Google Scholar]
  4. Tandel, G.S.; Biswas, M.; Kakde, O.G.; Tiwari, A.; Suri, H.S.; Turk, M.; Laird, J.R.; Asare, C.K.; Ankrah, A.A.; Khanna, N.N.; et al. A review on a deep learning perspective in brain cancer classification. Cancers 2019, 11, 111. [Google Scholar] [CrossRef]
  5. Agarwal, A.K.; Sharma, N.; Jain, M.K. Brain tumor classification using CNN. Adv. Appl. Math. Sci. 2021, 20, 397–407. [Google Scholar]
  6. Cho, K.-T.; Wang, K.-C.; Kim, S.-K.; Shin, S.-H.; Chi, J.G.; Cho, B.-K. Pediatric brain tumors: Statistics of SNUH, Korea. Child’s Nerv. Syst. 2002, 18, 30–37. [Google Scholar]
  7. Sinha, T. Tumors: Benign and Malignant. Cancer Ther. Oncol. Int. J. 2018, 10, 52–54. [Google Scholar] [CrossRef]
  8. Strowd, R.E.; Blakeley, J.O. Common Histologically Benign Tumors of the Brain. Contin. Lifelong Learn. Neurol. 2017, 23, 1680–1708. [Google Scholar] [CrossRef] [PubMed]
  9. Handa, H.; Bucy, P.C. Benign cysts of the Brain Simulating Brain Tumor. J. Neurosurg. 1956, 13, 489–499. [Google Scholar] [CrossRef] [PubMed]
  10. Mitchell, D.A.; Fecci, P.E.; Sampson, J.H. Immunotherapy of malignant brain tumors. Immunol. Rev. 2008, 222, 70–100. [Google Scholar] [CrossRef]
  11. Smith, M.A.; Freidlin, B.; Ries, L.A.G.; Simon, R. Trends in Reported Incidence of Primary Malignant Brain Tumors in Children in the United States. J. Natl. Cancer Inst. 1998, 90, 1269–1277. [Google Scholar] [CrossRef]
  12. Patel, A. Benign vs Malignant tumors. JAMA Oncol. 2020, 6, 1488. [Google Scholar] [CrossRef]
  13. Devkota, B.; Alsadoon, A.; Prasad, P.W.; Singh, A.K.; Elchouemi, A. Image Segmentation for Early Stage Brain Tumor Detection using Mathematical Morphological Reconstruction. Procedia Comput. Sci. 2018, 125, 115–123. [Google Scholar] [CrossRef]
  14. Alentorn, A.; Hoang-Xuan, K.; Mikkelsen, T. Presenting signs and symptoms in brain tumors. Handb. Clin. Neurol. 2016, 134, 19–26. [Google Scholar]
  15. Bandyopadhyay, S.K. Detection of Brain Tumor-A Proposed Method. J. Glob. Res. Comput. Sci. 2011, 2, 56–64. [Google Scholar]
  16. Ari, A.; Hanbay, D. Deep learning based brain tumor classification and detection system. Turk. J. Electr. Eng. Comput. Sci. 2018, 26, 2275–2286. [Google Scholar] [CrossRef]
  17. Işın, A.; Direkoğlu, C.; Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods. Procedia Comput. Sci. 2016, 102, 317–324. [Google Scholar] [CrossRef]
  18. Shankar, G.M.; Balaj, L.; Stott, S.L.; Nahed, B.; Carter, B.S. Liquid biopsy for brain tumors. Expert Rev. Mol. Diagn. 2017, 17, 943–947. [Google Scholar] [CrossRef] [PubMed]
  19. Ostertag, C.; Mennel, H.D.; Kiessling, M. Stereotactic biopsy of brain tumors. Surg. Neurol. 1980, 14, 275–283. [Google Scholar]
  20. Rincon-Torroella, J.; Khela, H.; Bettegowda, A.; Bettegowda, C. Biomarkers and focused ultrasound: The future of liquid biopsy for brain tumor patients. J. Neuro Oncol. 2011, 156, 33–48. [Google Scholar] [CrossRef]
  21. van den Brekel, M.W.; Lodder, W.L.; Stel, H.V.; Bloemena, E.; Leemans, C.R.; van der Waal, I. Observer variation in the histopathologic assessment of extranodal tumor spread in lymph node metastases in the neck. Head Neck 2012, 34, 840–845. [Google Scholar] [CrossRef]
  22. Coleman, R.E.; Hoffman, J.M.; Hanson, M.W.; Sostman, H.D.; Schold, S.C. Clinical application of PET for the evaluation of brain tumors. J. Nucl. Med. 1991, 32, 616–622. [Google Scholar] [PubMed]
  23. Buzug, T.M. Computed Tomography. In Springer Handbook of Medical Technology; Springer: Berlin/Heidelberg, Germany, 2011; pp. 311–342. [Google Scholar]
  24. Gordillo, N.; Montseny, E.; Sobrevilla, P. State of the art survey on MRI brain tumor segmentation. Magn. Reson. Imaging 2013, 31, 1426–1438. [Google Scholar] [CrossRef]
  25. Katti, G.; Ara, S.A.; Shireen, A. Magnetic Resonance Imaging (MRI)—A Review. Int. J. Dent. Clin. 2011, 3, 65–70. [Google Scholar]
  26. Chen, W. Clinical Applications of PET in Brain Tumors. J. Nucl. Med. 2007, 48, 1468–1481. [Google Scholar] [CrossRef] [PubMed]
  27. Basu, S.; Kwee, T.C.; Surti, S.; Akin, E.A.; Yoo, D.; Alavi, A. Fundamentals of PET and PET/CT imaging. N. Y. Acad. Sci. 2011, 1228, 1–174. [Google Scholar] [CrossRef]
  28. Brooks, W.H.; Mortara, R.H.; Preston, D. The Clinical Limitations of Brain Scanning in Metastatic Disease. J. Nucl. Med. 1974, 15, 620–621. [Google Scholar]
  29. Siegel, R.L.; Wagle, N.S.; Cercek, A.; Smith, R.A.; Jemal, A. Colorectal cancer statistics. CA Cancer J. Clin. 2023, 73, 233–254. [Google Scholar] [CrossRef]
  30. Siegel, R.L.; Miller, K.D.; Fedewa, S.A.; Ahnen, D.J.; Meester, R.G.; Barzi, A.; Jemal, A. Colorectal cancer statistics. CA A Cancer J. Clin. 2017, 67, 177–193. [Google Scholar] [CrossRef] [PubMed]
  31. Fei, B.; Lu, G.; Halicek, M.T.; Wang, X.; Zhang, H.; Little, J.V.; Magliocca, K.R.; Patel, M.; Griffith, C.C.; El-Deiry, M.W.; et al. Label-free hyperspectral imaging and quantification methods for surgical margin assessment of tissue specimens of cancer patients. In Proceedings of the 39th annual international conference of the IEEE engineering in medicine and biology society, Seogwipo, Republic of Korea, 11–15 July 2017; IEEE: Piscataway, NJ, USA; pp. 4041–4045. [Google Scholar]
  32. Lu, G.; Little, J.V.; Wang, X.; Zhang, H.; Patel, M.R.; Griffith, C.C.; El-Deiry, M.W.; Chen, A.Y.; Fei, B. Detection of head and neck cancer in surgical specimens using quantitative hyperspectral imaging. Clin. Cancer Res. 2017, 23, 5426–5436. [Google Scholar] [CrossRef]
  33. Kho, E.; de Boer, L.L.; Van de Vijver, K.K.; Sterenborg, H.J.; Ruers, T.J. Hyperspectral imaging for detection of breast cancer in resection margins using spectral-spatial classification. In Proceedings of the SPIE 10472, Diagnosis and Treatment of Diseases in the Breast and Reproductive System IV, San Francisco, CA, USA, 27–28 January 2018; SPIE: Bellingham, WA, USA, 2018; p. 104720f. [Google Scholar]
  34. Akbari, H.; Halig, L.V.; Schuster, D.M.; Osunkoya, A.; Master, V.; Nieh, P.; Chen, G.Z.; Fei, B. Hyperspectral imaging and quantitative analysis for prostate cancer detection. J. Biomed. Opt. 2012, 17, 076005. [Google Scholar] [CrossRef]
  35. Lu, G.; Qin, X.; Wang, D.; Muller, S.; Zhang, H.; Chen, A.; Chen, Z.G.; Fei, B. Hyperspectral imaging of neoplastic progression in a mouse model of oral carcinogenesis. In Proceedings of the SPIE 9788, Medical Imaging 2016: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, 1–3 March 2016; SPIE: Bellingham, WA, USA, 2016; p. 978812. [Google Scholar]
  36. Florimbi, G.; Fabelo, H.; Torti, E.; Lazcano, R.; Madroñal, D.; Ortega, S.; Salvador, R.; Leporati, F.; Danese, G.; Báez-Quevedo, A.; et al. Accelerating the K-nearest neighbors filtering algorithm to optimize the real-time classification of human brain tumor in hyperspectral images. Sensors 2018, 18, 2314. [Google Scholar] [CrossRef]
  37. Fabelo, H.; Ortega, S.; Ravi, D.; Kiran, B.R.; Sosa, C.; Bulters, D.; Callicó, G.M.; Bulstrode, H.; Szolna, A.; Piñeiro, J.F.; et al. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations. PLoS ONE 2018, 13, e0193721. [Google Scholar] [CrossRef] [PubMed]
  38. Zarei, N.; Bakhtiari, A.; Gallagher, P.; Keys, M.; MacAulay, C. Automated prostate glandular and nuclei detection using hyperspectral imaging. In Proceedings of the IEEE 14th international symposium on biomedical imaging, Melbourne, VIC, Australia, 18–21 April 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1028–1031. [Google Scholar]
  39. Ortega, S.; Fabelo, H.; Camacho, R.; Plaza, M.L.; Callico, G.M.; Lazcano, R.; Madroñal, D.; Salvador, R.; Juárez, E.; Sarmiento, R. P03.18 detection of human brain cancer in pathological slides using hyperspectral images. Neuro-Oncology 2017, 19, 37. [Google Scholar] [CrossRef]
  40. Rich, T.C.; Leavesley, S. Classification of normal and Lesional colon tissue using fluorescence excitation-scanning hyperspectral imaging as a method for early diagnosis of colon cancer. In Proceedings of the National Conference on Undergraduate Research, University of Memphis, Memphis, TN, USA, 6–8 April 2017; pp. 1063–1073. [Google Scholar]
  41. Pike, R.; Lu, G.; Wang, D.; Chen, Z.G.; Fei, B. A minimum spanning forest-based method for noninvasive cancer detection with hyperspectral imaging. IEEE Trans. Biomed. Eng. 2016, 63, 653–663. [Google Scholar] [CrossRef] [PubMed]
  42. Regeling, B.; Thies, B.; Gerstner, A.O.; Westermann, S.; Müller, N.A.; Bendix, J.; Laffers, W. Hyperspectral imaging using flexible endoscopy for laryngeal cancer detection. Sensors 2016, 16, 1288. [Google Scholar] [CrossRef] [PubMed]
  43. Ravi, D.; Fabelo, H.; Callic, G.M.; Yang, G.Z. Manifold embedding and semantic segmentation for intraoperative guidance with hyperspectral brain imaging. IEEE Trans. Med. Imaging 2017, 36, 1845–1857. [Google Scholar] [CrossRef]
  44. Nathan, M.; Kabatznik, A.S.; Mahmood, A. Hyperspectral imaging for cancer detection and classification. In Proceedings of the 3rd Biennial South African Biomedical Engineering Conference, Stellenbosch, South Africa, 4–6 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–4. [Google Scholar]
  45. Ortega, S.; Callicó, G.M.; Plaza, M.L.; Camacho, R.; Fabelo, H.; Sarmiento, R. Hyperspectral database of pathological in-vitro human brain samples to detect carcinogenic tissues. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging, Prague, Czech Republic, 13–16 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 369–372. [Google Scholar]
  46. Gao, X.W.; Hui, R.; Tian, Z. Classification of CT brain images based on deep learning networks. Comput. Methods Programs Biomed. 2017, 138, 49–56. [Google Scholar] [CrossRef]
  47. Özyurt, F.; Toğaçar, M.; Avcı, E.; Avcı, D. Classification of breast cancer images by using of convolutional attribute of ANN. In Proceedings of the 2018 3rd International Conference on Computer Science and Engineering (UBMK), Sarajevo, Bosnia and Herzegovina, 20–23 September 2018; pp. 420–423. [Google Scholar]
  48. Vijayakumar, D.T. Classification of Brain Cancer Type Using Machine Learning. J. Artif. Intell. Capsul. Netw. 2018, 1, 105–113. [Google Scholar]
  49. Koo, Y.-E.L.; Reddy, G.R.; Bhojani, M.; Schneider, R.; Philbert, M.A.; Rehemtulla, A.; Ross, B.D.; Kopelman, R. Brain cancer diagnosis and therapy with nanoplatforms. Adv. Drug Deliv. Rev. 2006, 58, 1556–1577. [Google Scholar] [CrossRef]
  50. Zhou, Y.; Liu, C.H.; Sun, Y.; Pu, Y.; Boydston-White, S.; Liu, Y.; Alfano, R.R. Human brain cancer studied by resonance Raman spectroscopy. J. Biomed. Opt. 2012, 17, 116021. [Google Scholar] [CrossRef]
  51. El-Dahshan, E.-S.A.; Mohsen, H.M.; Revett, K.; Salem, A.-B.M. Computer-aided diagnosis of human brain tumor through MRI: A survey and a new algorithm. Expert Syst. Appl. 2014, 41, 5526–5545. [Google Scholar] [CrossRef]
  52. Starr, C.; Evers, C.; Starr, L. Biology: Concepts and Applications without Physiology; Cengage Learning: Boston, MA, USA, 2010. [Google Scholar]
  53. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  54. Goetz, A.F.; Vane, G.; Solomon, J.E.; Rock, B.N. Imaging spectrometry for earth remote sensing. Science 1985, 228, 1147–1153. [Google Scholar] [CrossRef]
  55. Calin, M.A.; Parasca, S.V.; Savastru, D.; Manea, D. Hyperspectral imaging in the medical field: Present and future. Appl. Spectrosc. Rev. 2013, 49, 435–447. [Google Scholar] [CrossRef]
  56. Fabelo, H.; Ortega, S.; Szolna, A.; Bulters, D.; Piñeiro, J.F.; Kabwama, S.; J-O’Shanahan, A.; Bulstrode, H.; Bisshopp, S.; Ravi Kiran, B.; et al. In-Vivo Hyperspectral Human Brain Image Database for Brain Cancer Detection. IEEE Access 2019, 7, 39098–39116. [Google Scholar] [CrossRef]
  57. Kamruzzaman, M.; Sun, D.W. Introduction to Hyperspectral Imaging Technology. In Computer Vision Technology for Food Quality Evaluation; Academic Press: Cambridge, MA, USA, 2016; pp. 111–139. [Google Scholar]
  58. Halicek, M.; Fabelo, H.; Ortega, S.; Callico, G.M.; Fei, B. In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer. Cancers 2019, 11, 756. [Google Scholar] [CrossRef]
  59. Wu, D.; Sun, D.W. Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment. A review—Part I: Fundamentals. Innov. Food Sci. Emerg. Technol. 2013, 19, 1–14. [Google Scholar] [CrossRef]
  60. Pan, Z.; Healey, G.; Prasad, M.; Tromberg, B. Face recognition in hyperspectral images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 1552–1560. [Google Scholar]
  61. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag 2013, 1, 6–36. [Google Scholar] [CrossRef]
  62. Akbari, H.; Kosugi, Y.; Kojima, K.; Tanaka, N. Detection and analysis of the intestinal ischemia using visible and invisible hyperspectral imaging. IEEE Geosci. Remote Sens. Mag. 2010, 57, 2011–2017. [Google Scholar] [CrossRef]
  63. Fischer, C.; Kakoulli, I. Multispectral and hyperspectral imaging technologies in conservation: Current research and potential applications. Stud. Conserv. 2006, 51, 3–16. [Google Scholar] [CrossRef]
  64. Edelman, G.J.; Gaston, E.; Van Leeuwen, T.G.; Cullen, P.J.; Aalders, M.C. Hyperspectral imaging for non-contact analysis of forensic traces. Forensic Sci. Int. 2012, 223, 28–39. [Google Scholar] [CrossRef] [PubMed]
  65. Malkoff, D.B.; Oliver, W.R. Hyperspectral imaging applied to forensic medicine. Proc. SPIE 2000, 3920, 108–116. [Google Scholar]
  66. Kuula, J.; Pölönen, I.; Puupponen, H.H.; Selander, T.; Reinikainen, T.; Kalenius, T.; Saari, H. Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details. In Proceedings of the SPIE 8359, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense, Baltimore, MD, USA, 23–25 April 2012; Volume XI, p. 83590. [Google Scholar]
  67. Schuler, R.L.; Kish, P.E.; Plese, C.A. Preliminary observations on the ability of hyperspectral imaging to provide detection and visualization of bloodstain patterns on black fabrics. J. Forensic Sci. 2012, 57, 1562–1569. [Google Scholar] [CrossRef]
  68. Rodrigues, E.M.; Hemmer, E. Trends in hyperspectral imaging: From environmental and health sensing to structure-property and nano-bio interaction studies. Anal. Bioanal. Chem. 2022, 414, 4269–4279. [Google Scholar] [CrossRef] [PubMed]
  69. Mukundan, A.; Tsao, Y.M.; Wang, H.C. Detection of Counterfeit Holograms Using Hyperspectral Imaging; SPIE: Bellingham, WA, USA, 2013; Volume 12768. [Google Scholar]
  70. Huang, S.-Y.; Karmakar, R.; Chen, Y.-Y.; Hung, W.-C.; Mukundan, A.; Wang, H.-C. Large-Area Film Thickness Identification of Transparent Glass by Hyperspectral Imaging. Sensors 2024, 24, 5094. [Google Scholar] [CrossRef]
  71. Fang, Y.J.; Huang, C.W.; Karmakar, R.; Mukundan, A.; Tsao, Y.M.; Yang, K.Y.; Wang, H.C. Assessment of Narrow-Band Imaging Algorithm for Video Capsule Endoscopy Based on Decorrelated Color Space for Esophageal Cancer: Part II, Detection and Classification of Esophageal Cancer. Cancers 2024, 16, 572. [Google Scholar] [CrossRef]
  72. Nasir, F.A.; Liaquat, S.; Khurshid, K.; Mahyuddin, N.M. A hyperspectral unmixing approach for ink mismatch detection in unbalanced clusters. J. Inf. Intell. 2024, 2, 2949–7159. [Google Scholar] [CrossRef]
  73. Mukundan, A.; Patel, A.; Saraswat, K.D.; Tomar, A.; Wang, H. Novel Design of a Sweeping 6-Degree of Freedom Lunar Penetrating Radar. In Proceedings of the AIAA AVIATION 2023 Forum, San Diego, CA, USA, 12–16 June 2023. [Google Scholar]
  74. Mukundan, A.; Wang, H.-C.; Tsao, Y.-M. A Novel Multipurpose Snapshot Hyperspectral Imager used to Verify Security Hologram. In Proceedings of the 2022 International Conference on Engineering and Emerging Technologies (ICEET), Kuala Lumpur, Malaysia, 27–28 October 2022; pp. 1–3. [Google Scholar]
  75. Mukundan, A.; Huang, C.C.; Men, T.C.; Lin, F.C.; Wang, H.C. Air Pollution Detection Using a Novel Snap-Shot Hyperspectral Imaging Technique. Sensors 2022, 22, 6231. [Google Scholar] [CrossRef]
  76. Wang, C.Y.; Mukundan, A.; Liu, Y.S.; Tsao, Y.M.; Lin, F.C.; Fan, W.S.; Wang, H.C. Optical Identification of Diabetic Retinopathy Using Hyperspectral Imaging. J. Pers. Med. 2023, 13, 939. [Google Scholar] [CrossRef]
  77. Mukundan, A.; Tsao, Y.-M.; Lin, F.-C.; Wang, H.-C. Portable and low-cost hologram verification module using a snapshot-based hyperspectral imaging algorithm. Sci. Rep. 2022, 12, 18475. [Google Scholar] [CrossRef] [PubMed]
  78. Mukundan, A.; Tsao, Y.M.; Cheng, W.M.; Lin, F.C.; Wang, H.C. Automatic Counterfeit Currency Detection Using a Novel Snapshot Hyperspectral Imaging Algorithm. Sensors 2023, 23, 2026. [Google Scholar] [CrossRef]
  79. Afromowitz, M.; Callis, J.; Heimbach, D.; DeSoto, L.; Norton, M. Multispectral imaging of burn wounds: A new clinical instrument for evaluating burn depth. IEEE Trans. Biomed. Eng. 1988, 35, 842–850. [Google Scholar] [CrossRef]
  80. Carrasco, O.; Gomez, R.B.; Chainani, A.; Roper, W.E. Hyperspectral imaging applied to medical diagnoses and food safety. Geo-Spat. Temporal Image Data Exploit. 2003, III, 215–221. [Google Scholar]
  81. Zuzak, K.J.; Naik, S.C.; Alexandrakis, G.; Hawkins, D.; Behbehani, K.; Livingston, E.H. Characterization of a near-infrared laparoscopic hyperspectral imaging system for minimally invasive surgery. Anal. Chem. 2017, 79, 4709–4715. [Google Scholar] [CrossRef]
  82. Benavides, J.M.; Chang, S.; Park, S.Y.; Richards-Kortum, R.; Mackinnon, N.; MacAulay, C.; Milbourne, A.; Malpica, A.; Follen, M. Multispectral digital colposcopy for in vivo detection of cervical cancer. Opt. Express 2003, 11, 1223–1236. [Google Scholar] [CrossRef] [PubMed]
  83. Hirohara, Y.; Okawa, Y.; Mihashi, T.; Yamaguchi, T.; Nakazawa, N.; Tsuruga, Y.; Aoki, H.; Maeda, N.; Uchida, I.; Fujikado, T. Validity of retinal oxygen saturation analysis: Hyperspectral imaging in visible wavelength with fundus camera and liquid crystal wavelength tunable filter. Opt. Rev. 2007, 14, 151–158. [Google Scholar] [CrossRef]
  84. Fawzi, A.A.; Lee, N.; Acton, J.H.; Laine, A.F.; Smith, R.T. Recovery of macular pigment spectrum in vivo using hyperspectral image analysis. J. Biomed. Opt. 2011, 16, 106008. [Google Scholar] [CrossRef]
  85. Bégin, S.; Burgoyne, B.; Mercier, V.; Villeneuve, A.; Vallée, R.; Côté, D. Coherent anti-Stokes Raman scattering hyperspectral tissue imaging with a wavelength-swept system. Biomed. Opt. Express 2011, 2, 1296–1306. [Google Scholar] [CrossRef]
  86. Schweizer, J.; Hollmach, J.; Steiner, G.; Knels, L.; Funk, R.H.W.; Koch, E. Hyperspectral imaging—A new modality for eye diagnostics. Biomed. Tech. 2012, 57, 293–296. [Google Scholar] [CrossRef]
  87. Palmer, G.M.; Fontanella, A.N.; Zhang, G.; Hanna, G.; Fraser, C.L.; Dewhirst, M.W. Optical imaging of tumor hypoxia dynamics. J. Biomed. Opt. 2010, 15, 066021. [Google Scholar] [CrossRef] [PubMed]
  88. Maeder, U.; Marquardt, K.; Beer, S.; Bergmann, T.; Schmidts, T.; Heverhagen, J.; Zink, K.; Runkel, F.; Fiebich, M. Evaluation and quantification of spectral information in tissue by confocal microscopy. J. Biomed. Opt. 2012, 17, 106011. [Google Scholar] [CrossRef] [PubMed]
  89. Manni, F.; Fonolla, R.; van der Sommen, F.; Zinger, S.; Shan, C.; Kho, E.; de Koning, S.B.; Ruers, T.; de With, P.H. Hyperspectral imaging for colon cancer classification in surgical specimens: Towards optical biopsy during image-guided surgery. In Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1169–1173. [Google Scholar]
  90. Tsai, T.-J.; Mukundan, A.; Chi, Y.-S.; Tsao, Y.-M.; Wang, Y.-K.; Chen, T.-H.; Wu, I.-C.; Huang, C.-W.; Wang, H.-C. Intelligent Identification of Early Esophageal Cancer by Band-Selective Hyperspectral Imaging. Cancers 2022, 14, 4292. [Google Scholar] [CrossRef]
  91. Goto, A.; Nishikawa, J.; Kiyotoki, S.; Nakamura, M.; Nishimura, J.; Okamoto, T.; Ogihara, H.; Fujita, Y.; Hamamoto, Y.; Sakaida, I. Use of hyperspectral imaging technology to develop a diagnostic support system for gastric cancer. J. Biomed. Opt. 2015, 20, 016017. [Google Scholar] [CrossRef]
  92. Wang, X.; Wang, T.; Zheng, Y.; Yin, X. Hyperspectral-attention mechanism-based improvement of radiomics prediction method for primary liver cancer. Photodiagnosis Photodyn. Ther. 2021, 36, 102486. [Google Scholar] [CrossRef] [PubMed]
  93. Aboughaleb, I.H.; Aref, M.H.; El-Sharkawy, Y.H. Hyperspectral imaging for diagnosis and detection of ex-vivo breast cancer. Photodiagnosis Photodyn. Ther. 2020, 31, 101922. [Google Scholar] [CrossRef] [PubMed]
  94. Chen, R.; Luo, T.; Nie, J.; Chu, Y. Blood cancer diagnosis using hyperspectral imaging combined with the forward searching method and machine learning. Anal. Methods 2023, 15, 3885–3892. [Google Scholar] [CrossRef]
  95. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  96. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M.; QUADAS-2 Group. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef]
  97. Martinez, B.; Leon, R.; Fabelo, H.; Ortega, S.; Piñeiro, J.F.; Szolna, A.; Hernandez, M.; Espino, C.; O’Shanahan, A.J.; Carrera, D.; et al. Most Relevant Spectral Bands Identification for Brain Cancer Detection Using Hyperspectral Imaging. Sensors 2019, 19, 5481. [Google Scholar] [CrossRef]
  98. Urbanos, G.; Martín, A.; Vázquez, G.; Villanueva, M.; Villa, M.; Jimenez-Roldan, L.; Chavarrías, M.; Lagares, A.; Juárez, E.; Sanz, C. Supervised Machine Learning Methods and Hyperspectral Imaging Techniques Jointly Applied for Brain Cancer Classification. Sensors 2021, 21, 3827. [Google Scholar] [CrossRef]
  99. Ortega, S.; Fabelo, H.; Camacho, R.; de la Luz Plaza, M.; Callicó, G.M.; Sarmiento, R. Detecting brain tumor in pathological slides using hyperspectral imaging. Biomed. Opt. Express 2018, 9, 818–831. [Google Scholar] [CrossRef]
  100. Manni, F.; van der Sommen, F.; Fabelo, H.; Zinger, S.; Shan, C.; Edström, E.; Elmi-Terander, A.; Ortega, S.; Marrero Callicó, G.; de With, P.H.N. Hyperspectral Imaging for Glioblastoma Surgery: Improving Tumor Identification Using a Deep Spectral-Spatial Approach. Sensors 2020, 20, 6955. [Google Scholar] [CrossRef]
  101. Ortega, S.; Halicek, M.; Fabelo, H.; Camacho, R.; Plaza, M.d.l.L.; Godtliebsen, F.; Callicó, G.M.; Fei, B. Hyperspectral Imaging for the Detection of Glioblastoma Tumor Cells in H&E Slides Using Convolutional Neural Networks. Sensors 2020, 20, 1911. [Google Scholar] [CrossRef]
  102. Fabelo, H.; Halicek, M.; Ortega, S.; Shahedi, M.; Szolna, A.; Piñeiro, J.F.; Sosa, C.; O’Shanahan, A.J.; Bisshopp, S.; Espino, C.; et al. Deep Learning-Based Framework for In Vivo Identification of Glioblastoma Tumor using Hyperspectral Images of Human Brain. Sensors 2019, 19, 920. [Google Scholar] [CrossRef]
  103. Yin, J.; Tian, L. Joint inference about sensitivity and specificity at the optimal cut-off point associated with Youden index. Comput. Stat. Data Anal. 2014, 77, 1–13. [Google Scholar] [CrossRef]
  104. Khan, S.I.; Rahman, A.; Debnath, T.; Karim, R.; Nasir, M.K.; Band, S.S.; Mosavi, A.; Dehzangi, I. Accurate brain tumor detection using deep convolutional neural network. Comput. Struct. Biotechnol. J. 2022, 20, 4733–4745. [Google Scholar] [CrossRef] [PubMed]
  105. Writing Committee Members; Brott, T.G.; Halperin, J.L.; Abbara, S.; Bacharach, J.M.; Barr, J.D.; Bush, R.L.; Cates, C.U.; Creager, M.A.; Fowler, S.B.; et al. 2011 ASA/ACCF/AHA/AANN/AANS/ACR/ASNR/CNS/SAIP/SCAI/SIR/SNIS/SVM/SVS Guideline on the Management of Patients with Extracranial Carotid and Vertebral Artery Disease. Stroke 2011, 42, e464–e540. [Google Scholar] [PubMed]
  106. McNeill, K.A. Epidemiology of Brain Tumors. Neurol. Clin. 2016, 34, 981–998. [Google Scholar] [CrossRef]
  107. Nazir, M.; Shakil, S.; Khurshid, K. Role of deep learning in brain tumor detection and classification (2015 to 2020): A review. Comput. Med. Imaging Graph. 2021, 91, 101940. [Google Scholar] [CrossRef]
Figure 1. New research on diagnosing brain cancer combining HSI and CAD technology, with the advantages and disadvantages of each.
Figure 3. PRISMA 2020 flow diagram.
Figure 4. Overall accuracy performance of the CAD methods. Different colors represent the methods used in each study [32,97,98,99,100,101,102].
Figure 5. Deeks’ funnel plot of the studies used in the meta-analysis [32,97,98,99,100,101,102].
Figure 6. Univariable meta-regression across subgroups, including in vivo/in vitro status, cancer type, AI method, and publication year: (a) sensitivity; (b) specificity.
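For readers who wish to reproduce an asymmetry check of the kind summarized in Figure 5, the sketch below follows the standard formulation of Deeks’ test: each study’s log diagnostic odds ratio is regressed on the inverse square root of its effective sample size, weighted by that effective sample size. The 2 × 2 counts are hypothetical placeholders, and the code is a minimal illustration rather than the exact software pipeline used in this review.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study 2x2 counts: TP, FP, FN, TN (placeholders, not the review's data)
counts = np.array([
    [45,  5, 10, 40],
    [30,  8,  6, 55],
    [60, 12, 15, 70],
    [25,  4,  9, 33],
], dtype=float)
tp, fp, fn, tn = counts.T

# Log diagnostic odds ratio with a 0.5 continuity correction
ln_dor = np.log(((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5)))

# Effective sample size (ESS) and Deeks' predictor 1/sqrt(ESS)
n_diseased = tp + fn
n_healthy = fp + tn
ess = 4 * n_diseased * n_healthy / (n_diseased + n_healthy)
x = 1.0 / np.sqrt(ess)

# Weighted regression of lnDOR on 1/sqrt(ESS); a small slope p-value suggests funnel-plot asymmetry
model = sm.WLS(ln_dor, sm.add_constant(x), weights=ess).fit()
print(f"slope p-value (asymmetry test): {model.pvalues[1]:.3f}")
```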
Table 1. Studies on HSI in cancer detection.
| Year | Author | Application | Spectral Range (nm) |
|---|---|---|---|
| 2012 [34] | Hamed Akbari et al. | Prostate cancer | 500–950 |
| 2017 [32] | Guolan Lu et al. | Head and neck cancer | 450–900 |
| 2020 [89] | Francesca Manni et al. | Colon cancer | up to 1700 |
| 2022 [90] | Tsung-Jung Tsai et al. | Esophageal cancer | 415 and 540 |
| 2015 [91] | Atsushi Goto et al. | Gastric cancer | 1000–2500 |
| 2021 [92] | Xuehu Wang et al. | Liver cancer | 400–1000 |
| 2020 [93] | Ibrahim H. Aboughaleb et al. | Breast cancer | 400–700 |
| 2023 [94] | Riheng Chen et al. | Blood cancer | 400–1000 |
Table 2. QUADAS-2 results of the studies.
| Study | Patient Selection (Risk of Bias) | Index Test (Risk of Bias) | Reference Standard (Risk of Bias) | Flow and Timing (Risk of Bias) | Patient Selection (Applicability) | Index Test (Applicability) | Reference Standard (Applicability) |
|---|---|---|---|---|---|---|---|
| Beatriz Martinez et al., 2019 [97] | Low | Low | Unclear | Low | Unclear | Low | Unclear |
| Gemma Urbanos et al., 2021 [98] | Low | Low | Unclear | Low | Low | Low | Unclear |
| Samuel Ortega et al., 2018 [99] | Unclear | Low | Low | Low | Low | Low | Low |
| Francesca Manni et al., 2020 [100] | Low | Low | Unclear | Low | Unclear | Low | Unclear |
| Samuel Ortega et al., 2020 [101] | Unclear | Low | Low | Low | Unclear | Unclear | Low |
| Himar Fabelo et al., 2018 [32] | Low | Unclear | Unclear | Low | Low | Unclear | Unclear |
| Himar Fabelo et al., 2019 [102] | Low | Unclear | Unclear | Unclear | Low | Unclear | Unclear |

Low = low risk; Unclear = unclear risk; High = high risk.
Table 3. Clinical features of the studies considered in this review, including the nationality, method, lighting, accuracy, sensitivity, specificity, the number of images, and AUC of the studies.
| Author, Year | Nationality | Method | Band | In Vivo/In Vitro | Index No. | Dataset | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|
| Beatriz Martinez et al., 2019 [97] | Western | SVM band L1 | VNIR | In Vivo | 1 | 26 | 77.9 | 52.7 | 94.6 | N/A |
| | | SVM band L2 | | | 2 | | 77 | 57 | 91.2 | N/A |
| | | SVM band L3 | | | 3 | | 53.8 | 57.6 | 70.3 | N/A |
| Gemma Urbanos et al., 2021 [98] | Western | SVM | VNIR | In Vitro | 4 | 13 | 76.5 | 26 | 91 | N/A |
| | | RF | | | 5 | | 82.5 | 48.5 | 99 | N/A |
| | | CNN | | | 6 | | 77 | 47.5 | 99 | N/A |
| Samuel Ortega et al., 2018 [99] | Western | SVM | VNIR | In Vivo | 7 | 21 biopsies | 75.53 | 75.69 | 70.97 | N/A |
| | | ANN | | | 8 | | 78.02 | 75.44 | 77.03 | N/A |
| | | RF | | | 9 | | 69.13 | 72.94 | 79.33 | N/A |
| Francesca Manni et al., 2020 [100] | Western | 3D–2D CNN | VNIR | In Vivo | 10 | 26 | 80 | 68 | 98 | 70 |
| | | 3D–2D CNN + SVM | | | 11 | | 75 | 42 | 98 | 76 |
| | | SVM | | | 12 | | 76 | 43 | 98 | 70 |
| | | 2D CNN | | | 13 | | 72 | 14 | 97 | 71 |
| | | 1D DNN | | | 14 | | 78 | 19 | 97 | 89 |
| Samuel Ortega et al., 2020 [101] | Western | CNN HSI | VNIR | In Vivo | 15 | 13 biopsies | 85 | 88 | 77 | 87 |
| | | CNN RGB | | | 16 | | 80 | 81 | 68 | 84 |
| Himar Fabelo et al., 2018 [32] | Western | SVM | VNIR | In Vivo | 17 | 5 | 99.72 | 99.62 | 99.91 | N/A |
| Himar Fabelo et al., 2019 [102] | Western | 1D-DNN | VNIR | In Vivo | 18 | 26 | 94 | 88 | 100 | 99 |
| | | 2D-CNN | | | 19 | | 88 | 76 | 100 | 97 |
| | | SVM RBF Opt. | | | 20 | | 84 | 68 | 100 | 97 |
| | | SVM RBF Def. | | | 21 | | 73 | 58 | 88 | 86 |
| | | SVM Linear Opt. | | | 22 | | 77 | 54 | 100 | 99 |
| | | SVM Linear Def. | | | 23 | | 68 | 49 | 88 | 86 |
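For reference, the accuracy, sensitivity, and specificity values reported in Table 3 follow the standard confusion-matrix definitions; the short snippet below illustrates them, together with the Youden index discussed in ref. [103]. The counts are hypothetical, and the function is a generic illustration rather than any study’s actual evaluation code.

```python
def dta_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard diagnostic-test-accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # true-positive rate: tumor correctly detected
    specificity = tn / (tn + fp)                  # true-negative rate: healthy tissue correctly ruled out
    accuracy = (tp + tn) / (tp + fp + fn + tn)    # overall fraction of correct classifications
    youden_j = sensitivity + specificity - 1      # Youden index used for cut-off selection [103]
    return {
        "accuracy": accuracy,
        "sensitivity": sensitivity,
        "specificity": specificity,
        "youden_j": youden_j,
    }

# Hypothetical example: 88 TP, 23 FP, 12 FN, 77 TN
print(dta_metrics(tp=88, fp=23, fn=12, tn=77))
```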
Table 4. Subgroup and meta-analysis of diagnostic test accuracy, including the classification of data based on nationality, machine-learning model, images, brain cancer type, and publication date.
| Subgroup | Number of Studies | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC (%) |
|---|---|---|---|---|---|
| Average meta-analysis of all studies | 8 | 78.96 | 62.43 | 90.05 | 81.06 |
| In vivo/in vitro | | | | | |
| In vivo | 6 | 78.72 | 56.84 | 92.70 | 85.46 |
| In vitro | 2 | 79.93 | 84.81 | 79.46 | 93.05 |
| Machine learning | | | | | |
| SVM | 11 | 76.22 | 58.23 | 90.18 | 87.6 |
| CNN | 6 | 80.34 | 62.41 | 89.84 | 81.8 |
| DNN | 2 | 86 | 53.5 | 98.5 | 94 |
| RF | 2 | 75.81 | 60.72 | 89.16 | N/A |
| ANN | 1 | 78.02 | 75.44 | 77.03 | N/A |
| CNN + SVM | 1 | 75 | 42 | 98 | 76 |
| Cancer type | | | | | |
| GBM | 6 | 78.05 | 61.94 | 89.61 | 85.46 |
| Gliomas | 1 | 78.67 | 40.67 | 96.34 | N/A |
| Meningiomas | 1 | 88.5 | 100 | 85 | 93.05 |
| Publication date | | | | | |
| 2018–2020 | 6 | 78.05 | 61.94 | 89.61 | 85.46 |
| 2021–2024 | 2 | 82.6 | 64.4 | 91.8 | 93.05 |
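The subgroup summaries in Table 4 are pooled estimates from the meta-analysis. As an illustration of how study-level proportions such as sensitivity can be pooled, the sketch below applies a DerSimonian–Laird random-effects model on the logit scale to hypothetical counts; this is a generic approach and not necessarily the exact model or software used in this review.

```python
import numpy as np

def pool_proportion_dl(events: np.ndarray, totals: np.ndarray) -> float:
    """Pool study-level proportions (e.g., sensitivities) with a
    DerSimonian-Laird random-effects model on the logit scale."""
    events = events.astype(float)
    totals = totals.astype(float)
    # 0.5 continuity correction keeps the logits finite when events equal 0 or the total
    p = (events + 0.5) / (totals + 1.0)
    y = np.log(p / (1.0 - p))                                   # logit-transformed proportions
    v = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)    # within-study variances
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)                         # fixed-effect estimate
    q = np.sum(w * (y - y_fixed) ** 2)                          # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                     # between-study variance
    w_star = 1.0 / (v + tau2)                                   # random-effects weights
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-y_pooled))                      # back-transform to a proportion

# Hypothetical example: true positives and diseased counts from four studies
tp = np.array([40, 25, 60, 18])
diseased = np.array([50, 30, 80, 20])
print(f"pooled sensitivity ~ {pool_proportion_dl(tp, diseased):.3f}")
```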
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
