Review

Advancements in Hyperspectral Imaging and Computer-Aided Diagnostic Methods for the Enhanced Detection and Diagnosis of Head and Neck Cancer

1 Division of Gastroenterology, Department of Internal Medicine, Kaohsiung Medical University Hospital, Kaohsiung Medical University, No. 100, Tzyou 1st Rd., Sanmin Dist., Kaohsiung City 80756, Taiwan
2 Department of Medicine, Faculty of Medicine, College of Medicine, Kaohsiung Medical University, No. 100, Tzyou 1st Rd., Sanmin Dist., Kaohsiung City 80756, Taiwan
3 Department of Gastroenterology, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chiayi 62247, Taiwan
4 Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chiayi 62102, Taiwan
5 Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, No. 42, Avadi-Vel Tech Road Vel Nagar, Avadi, Chennai 600062, Tamil Nadu, India
6 Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st Rd., Lingya District, Kaohsiung City 80284, Taiwan
7 School of Medicine, National Defense Medical Center, No. 161, Sec. 6, Minquan E. Rd., Neihu Dist., Taipei City 11490, Taiwan
8 Hitspectra Intelligent Technology Co., Ltd., 8F. 11-1, No. 25, Chenggong 2nd Rd., Qianzhen Dist., Kaohsiung City 80661, Taiwan
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Biomedicines 2024, 12(10), 2315; https://doi.org/10.3390/biomedicines12102315
Submission received: 23 August 2024 / Revised: 12 September 2024 / Accepted: 16 September 2024 / Published: 11 October 2024
(This article belongs to the Special Issue Novel Approaches towards Targeted Head and Neck Cancer Therapies)

Abstract
Background/Objectives: Head and neck cancer (HNC), predominantly squamous cell carcinoma (SCC), presents a significant global health burden. Conventional diagnostic approaches often face challenges in terms of achieving early detection and accurate diagnosis. This review examines recent advancements in hyperspectral imaging (HSI), integrated with computer-aided diagnostic (CAD) techniques, to enhance HNC detection and diagnosis. Methods: A systematic review of seven rigorously selected studies was performed. We focused on CAD algorithms, such as convolutional neural networks (CNNs), support vector machines (SVMs), and linear discriminant analysis (LDA). These are applicable to the hyperspectral imaging of HNC tissues. Results: The meta-analysis findings indicate that LDA surpasses other algorithms, achieving an accuracy of 92%, sensitivity of 91%, and specificity of 93%. CNNs exhibit moderate performance, with an accuracy of 82%, sensitivity of 77%, and specificity of 86%. SVMs demonstrate the lowest performance, with an accuracy of 76% and sensitivity of 48%, but maintain a high specificity level at 89%. Additionally, in vivo studies demonstrate superior performance when compared to ex vivo studies, reporting higher accuracy (81%), sensitivity (83%), and specificity (79%). Conclusion: Despite these promising findings, challenges persist, such as HSI’s sensitivity to external conditions, the need for high-resolution and high-speed imaging, and the lack of comprehensive spectral databases. Future research should emphasize dimensionality reduction techniques, the integration of multiple machine learning models, and the development of extensive spectral libraries to enhance HSI’s clinical utility in HNC diagnostics. This review underscores the transformative potential of HSI and CAD techniques in revolutionizing HNC diagnostics, facilitating more accurate and earlier detection, and improving patient outcomes.

1. Introduction

On a global scale, head and neck cancer (HNC) is the seventh most frequently occurring cancer [1], resulting in 325,000 deaths and more than 660,000 new cases yearly [2]. Squamous cell carcinoma (SCC), which originates from the epithelial lining of the oral cavity, throat, and larynx, accounts for about 90% of HNC cases [3]. Cancers of the head and neck essentially comprise diseases of the upper aerodigestive tract, encompassing all lesions of the mucosal surfaces from the nasal cavity, oral cavity, and nasopharynx down to the larynx, hypopharynx, and trachea [4]. Based on estimates from the World Health Organization (WHO), 439,000 cases of oral and oropharyngeal cancer are expected to be reported in 2030 [5]. A retrospective study of 9950 patients examined a number of risk factors for HNC and its subtypes [6]. Smoking was identified as the strongest independent risk factor for HNC, acting synergistically with alcohol consumption [6,7,8,9,10,11,12,13,14]. Human papillomavirus (HPV) infection is known to be a major risk factor for oropharyngeal cancers (OPCs), particularly in younger age groups [15]. Additional risk factors for HNC include dental plaque accumulation, poor dental hygiene, long-term irritation of the oral lining, dietary factors, a low body mass index, and UV light exposure; these factors can alter toxin and carcinogen metabolism, which can contribute to the development of head and neck squamous cell carcinoma (HNSCC) [16,17,18]. In 2008, Molina et al. reported that the 5-year overall survival rate among HNSCC patients in the United States was 39.5%, with rates of 34% for men and 52% for women; comparatively, the 1-year overall survival rate was 84%, with rates of 59% for men and 62% for women [19].
From 2016 to 2018, there was a rise in the number of patients with advanced stages of HNC in the UK, with higher percentages of patients with advanced disease in Scotland, Wales, and Northern Ireland (65–69%) than in England (58%) [20]. When combined therapeutic approaches are used, the global 5-year survival rates for surgery alone, surgery + adjuvant therapy, and exclusive radiation therapy, with or without chemotherapy, range from 50% to 65% [21].
Artificial intelligence (AI) is becoming more widely used in diagnostics as a result of the collection of massive digital datasets from the scanning of pathological glass slides [22]. A large number of researchers are currently developing computer-aided diagnostic (CAD) models for cancer detection and diagnosis at an early stage [23,24,25,26,27,28,29,30,31,32,33]. These include machine learning (ML) algorithms trained on medical imaging data from CT, MRI, and PET scans to automatically detect worrisome lesions or areas suggestive of cancer. The increasing use of ML has generated considerable interest, backed by a growing body of evidence showing its wide applicability in various cancer types [34,35,36,37,38,39,40,41,42]. In a study by Rahman et al., the authors distinguished between normal and SCC tissue in oral histology images using a support vector machine (SVM); the resulting diagnostic accuracy was 100% [43]. Krishnan et al. used linear discriminant analysis (LDA) and neural networks to categorize oral premalignant lesions into normal and atrophy-causing oral submucous fibrosis cases, which resulted in diagnostic sensitivity and specificity values of 92.31% and 100%, respectively [44]. Dos Santos et al. used a convolutional neural network (CNN) to detect cancerous regions in samples of oral cavity tissues, resulting in an accuracy of 97.6% and precision of 91.1% [45]. Furthermore, various reports have demonstrated that machine learning models surpass traditional statistical methods when it comes to tasks associated with head and neck cancer [46,47,48,49,50,51]. Biosensors have also emerged as a promising tool in the diagnosis of cancers, using biological molecules to recognize and detect specific biomarkers associated with cancer. Wang et al. showed how to detect and quantify squamous cell carcinoma antigen (SCC-Ag) levels using immobilized SCC-Ag antibodies and an interdigitated electrode sensor modified with titanium oxide (TiO2).
The sensitivity of the TiO2-modified sensor was 1000 times higher than that of other substrates [52]. Using surface-enhanced Raman scattering (SERS), Vohra et al. were able to differentiate SCC from other cell lines with a sensitivity of 93% and specificity of 100% [53]. In 2018, Soares et al. detected human papillomavirus in HNC cases using a microfluidic-based geno-sensor and managed to differentiate between HPV16-positive and -negative HNC cell lines [54]. In 2020, Olivia et al. used HNC cell lines to detect MGMT gene methylation using a self-assembled monolayer matrix-based geno-sensor; the geno-sensor showed a high degree of sensitivity, with a limit of detection of 0.24 × 10−12 mol L−1 over a wide concentration range of 1.0 × 10−11 to 1.0 × 10−6 mol L−1 [55]. With the use of SERS and the molecular dynamics (MD) analysis of biomarkers, Edoardo et al. were able to identify HNC and uncover the underlying biomolecular process involved in disease-related marker adsorption on silver surfaces [56].
Although HNC detection with CAD systems may be beneficial, this technology also has drawbacks. The efficacy of CADx hinges on proficient medical image analysis; such analysis is pivotal as it directly impacts the clinical diagnosis and treatment process [57]. Traditional red, green, and blue (RGB) imaging is limited to capturing the visible spectrum’s three diffuse Gaussian spectral bands (380–740 nm) [58]. According to Bueno et al., there are still limitations to the amounts of data that can be processed and the processing techniques that are applicable when whole-slide imaging (WSI) is used for image processing [59]. Drukker et al. observed that CAD systems’ performance does not always predict their impact when they are employed in radiologists’ clinical practice [60]. Furthermore, research by Chang et al. and Acharya et al. demonstrated that CAD systems are impractical for real-time applications and require complicated computer analysis [61,62]. Occasionally, CAD tools provide inaccurate information regarding images, making them inappropriate for the early identification of malignancy [63]. Neal et al. noted that initial implementations of artificial neural networks (ANNs) required considerable tuning to achieve satisfactory results [64]. Additionally, CAD models are ineffective when the user provides incorrect data inputs during data gathering [65,66,67]. However, hyperspectral imaging (HSI) emerges as a non-invasive technique capable of overcoming the challenges outlined above [68,69]. In HSI, it is possible to extract basic features like color and texture, as well as to analyze more complex semantic features [70].
HSI is a hybrid diagnostic modality that extracts spectroscopic data from images [71]. It combines digital imaging with spectroscopy to increase the spectral quality of images within and beyond the visible spectrum [72]. Goetz et al. reported that NASA initially developed HSI for use in Earth surveillance and space exploration [73]. When spatial and spectral information is acquired and each pixel’s 2D spectral data are identified, a hyperspectral image is created; as a result, the origin of each spectrum in the area of interest may be identified [74]. Further, without the need for scanning, three-dimensional hypercubes can be quickly captured using single-shot hyperspectral imagers [75]. Researchers have used a variety of HSI approaches, including filter-based, whisk-broom, push-broom, and snapshot sensors [76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93]. High-resolution spectral data can be obtained using push-broom hyperspectral sensors; however, the scanning acquisition design of these sensors makes it more difficult to produce geometrically correct mosaics from several hyperspectral swaths [94]. In generating spatial data with a pixel-oriented hyperspectral sensor, whisk-broom scanning is helpful; nevertheless, because of the longer measurement length, it poses problems when it is used in outdoor cultural heritage situations [95]. The filter-based approach uses optical filters to capture spectrum information by means of wavelength-coded imaging [96]. Wu et al. developed a novel HSI technique known as snapshot hyperspectral volumetric microscopy, which collects data in five dimensions for the analysis of biological components [97]. 
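Conceptually, a hypercube of this kind is simply a three-dimensional array: two spatial axes plus one spectral axis, so that every pixel stores a full spectrum. The following minimal NumPy sketch illustrates this structure; the dimensions and values are hypothetical, chosen only to mirror the 450–900 nm range and 5 nm sampling used in several of the reviewed studies.

```python
import numpy as np

# Hypothetical hypercube: 256 x 256 spatial pixels, 91 spectral bands
# (450-900 nm sampled at 5 nm); random values stand in for real data.
wavelengths = np.arange(450, 901, 5)            # 91 band centres in nm
cube = np.random.rand(256, 256, wavelengths.size)

# Each spatial pixel carries a full spectrum:
spectrum = cube[120, 80, :]                     # 1D spectrum at pixel (120, 80)

# A single-band "slice" is an ordinary 2D image:
band_620nm = cube[:, :, np.searchsorted(wavelengths, 620)]

print(spectrum.shape, band_620nm.shape)         # (91,) (256, 256)
```

This pixel-wise spectral access is what allows the origin of each spectrum in a region of interest to be identified, as described above.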
HSI has seen recent use in various sectors, including medical diagnostics [98,99,100,101,102], environmental monitoring [103,104,105,106,107], forestry [108,109,110,111,112], mining and geology [113,114,115,116,117], remote sensing [118,119,120,121,122], agriculture [123,124,125,126,127], counterfeit detection [128,129,130,131,132], the military [133,134,135,136,137], and astronomy [138,139,140,141,142]. HSI has been particularly useful in the field of medical diagnostics in the recognition and diagnosis of various kinds of cancer. The most recent studies regarding the application of HSI for cancer diagnosis are presented in Table 1.
Figure 1 presents an illustrative diagram of the key elements regarding HNC. This study examines recent research concerning HNC diagnosis and detection using HSI technology combined with CAD methods. It assesses how effective these combined methods are in detecting and diagnosing HNC, with an emphasis on their sensitivity, specificity, and accuracy. Additionally, it gives a brief overview of the research and makes suggestions based on a meta-analysis of several CAD methodologies.

2. Materials and Methods

This section describes the methods used to gather studies relevant to this review, especially those dealing with the use of HSI technology in the detection and diagnosis of HNC. It outlines the criteria for including and excluding studies to ensure the selection of relevant research.

2.1. Study Selection Criteria

The objective of this review was to analyze advancements in HNC diagnosis and detection using HSI techniques, concentrating on studies that fulfilled the following inclusion criteria: providing clear numerical outcomes, such as datasets, accuracy, sensitivity, and specificity; focusing on HNC detection using HSI; published between 2015 and 2024; published in journals with an H-index exceeding 75 and falling within either the first quartile (Q1) or second quartile (Q2); employing either prospective or retrospective methodologies; and written in English. Studies were excluded if they lacked adequate information or fell under any of the categories of narrative pieces, comments, proceedings, study protocols, systematic reviews, meta-analyses, and conference contributions. QUADAS-2 was used to evaluate the scientific merit of the papers under examination [156,157,158]. This tool comprises four domains: “patient selection”, “index test”, “reference standard”, and “flow and timing”; an evaluation of “applicability” is also included in the first three domains. Each domain is rated as carrying a high, low, or unclear risk of bias [159]. Different researchers have used the QUADAS-2 tool to check whether studies match real-world application conditions [160,161,162,163]. Sounderajah et al. conducted a quality review of AI-centered diagnostic test accuracy (DTA) studies and found that many of the studies did not tackle the study topic, instead focusing on the application of AI algorithms to predict treatment success, metastasis, recurrence, or disease prognosis [164]. Adeoye et al. produced evaluation designs to validate non-invasive biomarkers for HNC detection using the QUADAS-2 tool, finding that numerous potentially valuable biomarkers for HNC had not been thoroughly assessed using the most precise techniques available [165]. Therefore, in this study, two authors (G.G. and A.M.) performed QUADAS-2 analyses on several papers related to HNC detection using HSI and found that 7 of the 1030 available papers complied with the inclusion and exclusion criteria (see Supplementary Materials, Section 1, for further details).

2.2. QUADAS-2 Results

The results of the QUADAS-2 analyses of the seven studies examined in this review are presented in Table 2. Specific criteria, such as patient selection, the index test, the reference standard, flow and timing, and the possibility of bias, were used to evaluate each study. The Supplementary Materials contain further information on the quality analysis, inclusion and exclusion criteria, and other related topics.

3. Results

This section presents the findings of the review, detailing the clinical features observed in each study and providing a concise explanation. It includes the numerical results obtained from each study and provides comparisons of these results in relation to sensitivity, specificity, and accuracy.

3.1. Clinical Features Observed in the Studies

The studies selected for this review analyzed different CAD methods for detecting and diagnosing HNC. Each study is briefly described herein, emphasizing its goals, the CAD algorithm utilized, and the outcomes. Furthermore, the accuracy, sensitivity, and specificity of HNC detection and classification in each study are presented using subgrouping.
In 2019, Halicek et al. used a CNN to detect the cancer margins of HNSCC. In total, 102 patients were included in this study. The accuracy rates in their findings varied depending on the anatomical site within the oral cavity, ranging from 61 ± 7% for the tongue to 95 ± 4% for the oropharynx. Variability was also observed in the sensitivity and specificity. For example, in the hypopharynx, the sensitivity was 20 ± 14%, while the specificity was an impressive 99 ± 1%. The imaging was performed in vivo, including a variety of locations such as the tongue, using a wavelength range of 450–900 nm and a resolution of 5 nm. Eggert et al. (2021) demonstrated the use of deep learning algorithms combined with HSI for the in vivo diagnosis of HNC. This study applied a 3D-CNN and was mostly concerned with oropharyngeal, hypopharyngeal, and laryngeal tumors, obtaining an overall accuracy of 81%, sensitivity of 83%, and specificity of 79%. The spectral range that was imaged was 390–680 nm, and the spectral resolution was 10 nm. In vivo data collection from 98 participants was performed for this investigation.
Halicek et al. (2017) used HSI to classify HNC. In this study, a CNN was trained on both SCCa and thyroid cancer. For SCCa, an accuracy of 74%, sensitivity of 67%, and specificity of 67% were obtained, while for thyroid cancer, an accuracy of 67%, sensitivity of 67%, and specificity of 67% were obtained. This study was conducted ex vivo with a wavelength range of 450–900 nm and resolution of 5 nm. Ma et al. (2022) used a CNN to detect HNSCC in the larynx, hypopharynx, buccal mucosa, and floor of the mouth, resulting in accuracy, sensitivity, and specificity values of 82%, 72%, and 93%, respectively. Twenty patients were examined in this study, and the experiments were conducted ex vivo in the spectral range of 470–720 nm, with a resolution of 3 nm. Halicek et al. (2020) detected malignancies in the salivary and thyroid glands using deep learning algorithms and HSI. Their CNN was able to identify salivary gland tumors with 82 ± 8% accuracy, 72 ± 11% sensitivity, and 82 ± 11% specificity. These tests were carried out ex vivo on 82 thyroid tissue samples and an undefined number of salivary glands. Pertzborn et al. (2022) investigated the intraoperative evaluation of tumor margins in tissue slices using HSI and ML approaches. In this study, an SVM detected oral cancer with accuracy, sensitivity, and specificity values of 76%, 48%, and 89%, respectively, in the wavelength range of 500–1000 nm and with a resolution of 2 nm. This study included seven patients, and the experiment was conducted ex vivo.
In 2017, Fei et al. completed a pilot study using label-free reflectance HSI for tumor margin evaluations utilizing surgical tissues from cancer patients. This study encompassed the oral cavity and thyroid regions. By employing LDA, they achieved promising results. For the tumor margin assessment in the oral cavity, the LDA model exhibited 90%, 89%, and 91% accuracy, sensitivity, and specificity, respectively, within the wavelength range of 450–950 nm and with a 2 nm resolution. The LDA model performed even better in the thyroid region, with accuracy, sensitivity, and specificity values of 94%, 94%, and 95%, respectively. These tests were carried out in vivo on 16 surgical specimens.
Table 3 below illustrates the clinical characteristics of the selected studies on HNC detection and diagnosis. All seven journal manuscripts included in this review specified the number of patients involved in the study. Additionally, these studies offered full details on the methodology used to collect data from hyperspectral images. In total, 375 patients were involved in the investigations. The selected papers focused on wavelengths in the visible and near-infrared regions. The most commonly utilized CAD algorithm in the experiments was the CNN, followed by the SVM and LDA. Across all the investigations, the highest accuracy (95%) was obtained for oropharyngeal cancer, while the lowest accuracy (42%) was obtained for hypopharyngeal cancer. With regard to the total number of patients across all the studies, Halicek et al. (2019) examined the largest share of patients (27.2%), whereas Pertzborn et al. examined the smallest (1.8%). Additionally, Fei et al. discovered greater levels of sensitivity and specificity for thyroid cancer than for other cancer types. Pertzborn et al. used a greater wavelength range than was used in previous research. Three experiments were conducted in vivo, while four were conducted ex vivo.

3.2. Meta-Analysis of the Studies

The average accuracy, sensitivity, and specificity values for the seven studies examined in this review were 77%, 68%, and 80%, respectively. Research conducted in vivo showed higher accuracy (81%) and sensitivity (83%) but lower specificity (79%) compared with research conducted ex vivo, which demonstrated lower accuracy (79%) and sensitivity (65%) but higher specificity (91%). In total, 216 patients were involved in the in vivo studies, whereas 159 patients were involved in the ex vivo studies. Table 4 presents the averages obtained via a meta-analysis of all the studies and HNC subgroups.
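The subgroup averages reported here are simple (unweighted) means across studies; because patient counts vary widely (from 7 to 102 patients per study), a patient-weighted mean can differ. The sketch below illustrates both calculations in NumPy; the per-study accuracy values and patient counts are hypothetical placeholders, not the exact figures extracted in this review.

```python
import numpy as np

# Illustrative per-study accuracies and patient counts (hypothetical,
# chosen only to demonstrate the two averaging schemes).
acc      = np.array([0.81, 0.74, 0.82, 0.82, 0.76, 0.90, 0.55])
patients = np.array([102,  98,   20,   82,   7,    16,   50])

simple_mean   = acc.mean()                          # unweighted, as reported here
weighted_mean = np.average(acc, weights=patients)   # patient-weighted alternative

print(round(simple_mean, 3), round(weighted_mean, 3))   # 0.771 0.763
```

A patient-weighted pooling scheme gives large studies proportionally more influence, which is one reason meta-analytic summaries can differ from simple averages of study-level metrics.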
ML and AI techniques have significantly improved healthcare, particularly in areas such as computer-aided diagnosis, retrieval, and analysis, and medical image processing [173]. In the meta-analysis, LDA was found to be the most effective CAD method for HNC diagnosis and detection, with 92% accuracy, 91% sensitivity, and 93% specificity. CNNs performed moderately, with 82% accuracy, 77% sensitivity, and 86% specificity, showing competence but slightly lower sensitivity when compared to LDA; despite this, CNNs were most commonly used in the selected studies. SVM had the lowest accuracy and sensitivity, 76% and 48%, respectively, but reasonably high specificity of 89%. Although LDA, CNNs, and SVMs are all AI approaches used to diagnose HNC, they have distinct properties. CNNs are used to process images and extract features [174]. LDA is able to project high-dimensional pattern samples to the ideal discriminant vector space, allowing it to extract classification information and reduce feature space dimensions [175]. SVMs are valued for their versatility and robustness in high-dimensional spaces [176]. LDA was employed in only one trial, whereas CNNs were used in five; nevertheless, the meta-analysis results demonstrated that LDA was superior to CNNs in HNC detection.
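To make LDA's projection step concrete, the following sketch implements a two-class Fisher discriminant on simulated "spectra" in plain NumPy. The band count, class separation, and noise level are all invented for illustration and do not correspond to any reviewed dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spectra: two classes (e.g., healthy vs. tumour tissue),
# 30 bands each; the class means differ in bands 10-14 only.
n_bands = 30
mu_healthy = np.zeros(n_bands)
mu_tumour = np.zeros(n_bands)
mu_tumour[10:15] = 0.8
X0 = rng.normal(mu_healthy, 0.5, size=(100, n_bands))
X1 = rng.normal(mu_tumour, 0.5, size=(100, n_bands))

# Fisher discriminant direction: w = Sw^-1 (mu1 - mu0),
# where Sw is the pooled within-class scatter matrix.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# Project every spectrum onto the 1D discriminant axis and
# classify with a midpoint threshold.
threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
accuracy = (np.sum(X0 @ w <= threshold) + np.sum(X1 @ w > threshold)) / 200
print(f"training accuracy: {accuracy:.2f}")
```

The projection from 30 bands down to a single discriminant axis is the dimensionality-reduction property of LDA noted above; a CNN, by contrast, learns its feature extraction directly from the image data.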
Every type of cancer has unique risk factors and causes, but there are certain anatomical similarities, as well as shared risk factors such as alcohol consumption, smoking, and HPV infection [177,178,179,180,181,182,183]. In particular, oral, tongue, and laryngeal cancers share similar risk factors and anatomical proximity [184,185], while others, like thyroid cancer, diverge due to endocrine-related factors [186,187,188,189,190]. With an accuracy of 86%, sensitivity of 87%, and specificity of 84%, the detection of thyroid cancer exhibited the best overall performance characteristics. Conversely, hypopharyngeal cancer detection exhibited a notable specificity of 99%, despite its lower sensitivity of 20% and accuracy of 42%, which was the lowest accuracy observed among the cancer types. Maxillary (max.) sinus cancer detection exhibited the second-lowest accuracy of 58% but the highest sensitivity of 93% among the cancer types. The lowest specificity was found for max. sinus cancer (52%) and tongue cancer (53%).
Studies that used a spectral range in the visible (VIS) band exhibited better performance metrics, with accuracy, sensitivity, and specificity values of 81.5%, 77.5%, and 86%, respectively, as compared to those that used the visible and near-infrared (VNIR) band, which exhibited an average accuracy of 75.6%, sensitivity of 69.5%, and specificity of 76.07%. Even though the VIS band showed better results, it was used in only two studies, whereas the VNIR band was used in five.
The highest average accuracy of 86%, with a sensitivity of 87% and a specificity of 86%, was shown by two studies that were published between 2015 and 2017. Conversely, the three studies that were published between 2018 and 2021 demonstrated a slightly lower average accuracy of 81%, along with a sensitivity of 83% and a specificity of 79%. However, the two studies published between 2022 and 2024 showed a decrease in average accuracy to 79% and a substantial decrease in sensitivity to 60%, despite their high specificity of 91%. These findings show a potential temporal variance in diagnostic study outcomes, with earlier studies generating higher overall performance metrics than more recent ones, underscoring the necessity for ongoing research and improvement in head and neck cancer diagnosis.

3.3. Subgroup Meta-Analysis

The quantitative findings from this meta-analysis of HNC research were organized graphically in a Deeks’ funnel plot and forest plots. In Figure 2, forest plots display the sensitivity, specificity, and accuracy values for each CAD technique, experiment type (in/ex vivo), cancer type, spectral band, and publication year (see Supplementary Materials S3 and S5). These analyses were performed at the 95% confidence level (Supplementary Materials Section S5 presents the ANOVA tables for the different subgroups). The average accuracy, specificity, and sensitivity values for the data used in each categorization were used to determine the line of no effect in this investigation. Low-performance data are defined as data that cross the line of no effect on the left-hand side, whereas data that do not cross the line of no effect are considered to perform well. The meta-analyses of the accuracy, sensitivity, and specificity showed that the experiment type (in/ex vivo), spectral band, method, and year of publication intersected with the line of no effect, indicating a high chance of comparable performance. In contrast, the cancer type subgroup showed deviations from the line of no effect, suggesting potential differences in diagnostic accuracy among anatomical regions.
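The forest-plot reasoning above can be sketched numerically: in the convention used in this review, a study "crosses the line of no effect" when the subgroup average falls inside its 95% confidence interval. The snippet below uses a simple normal-approximation interval for a proportion; the inputs (a sensitivity of 0.83 from 98 patients, against a pooled average of 0.68) are hypothetical round numbers for illustration.

```python
import math

def proportion_ci(p, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion p
    estimated from n patients, clipped to the valid [0, 1] range."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical study: sensitivity 0.83 from 98 patients.
lo, hi = proportion_ci(0.83, 98)
line_of_no_effect = 0.68   # hypothetical subgroup average

crosses = lo <= line_of_no_effect <= hi
print(f"95% CI: ({lo:.3f}, {hi:.3f}); crosses line of no effect: {crosses}")
```

Here the interval lies entirely above the pooled average, so this hypothetical study would not cross the line of no effect and would be classed as performing well.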
A Deeks’ funnel plot was created using a variety of classifications, including the CAD method, spectral band, experiment type (in/ex vivo), anatomical location, and year of publication (see Supplementary Materials S4 and S6). The Deeks’ funnel plot in Figure 3 displays the diagnostic odds ratio for each anatomical location against the inverse square root of the effective sample size [191,192,193,194]. The Deeks’ funnel plot obtained in this investigation did not indicate the presence of heterogeneity. The regression line for the anatomical location was found to be at approximately 12. The max. sinus, hypopharynx, and thyroid exhibited greater standard error values than other anatomical locations. Halicek et al. (2019) used HSI to detect SCC in different locations, including the oral cavity, larynx, nasal cavity, oropharynx, hypopharynx, and max. sinus, using the same dataset. A smaller dataset was found for salivary cancer compared to other cancer types in the investigation by Halicek et al. (2020).
Figure 4 presents a graphical comparison of the accuracy values reported in the examined studies for the identification of the various HNC types. The most widely detected type was oral cancer. Oropharyngeal cancer was the most accurately detected among all the cancer types, with 95% accuracy, followed by thyroid cancer at 94%. In Halicek et al.’s (2019) study, hypopharyngeal carcinoma had the lowest accuracy of all, at 42%.

4. Discussion

HSI techniques can be applied as mesoscopic or microscopic imaging methods to monitor and analyze anatomical features on different scales, ranging from cells to tissues. Many studies have proven the capacity of HSI as a disease diagnostic tool in various tissues, including those of the head and neck [51,195,196,197,198,199,200]. Although preclinical and clinical research has shown the potential of HSI approaches, there are still significant challenges to overcome before HSI can be effectively used in clinical settings [201]. One of these challenges is HSI’s sensitivity to external factors that might affect the quality and dependability of the collected data, including atmospheric conditions, variations in lighting, and the distance between the imaging system and the object [202]. Various methods can be employed to mitigate the effects of external factors, particularly lighting conditions: standardized lighting setups, white and dark reference calibration before image acquisition, and post-processing algorithms that normalize spectral data. Reflectance-based approaches can also be used to reduce the impact of lighting inconsistencies. Together, these techniques enhance the reliability of HSI in different experimental and clinical environments. In addition, researchers need to address the challenge of achieving high-resolution, high-speed image acquisition at video rates when HSI is used in medical applications [203]. Real-time acquisition would facilitate the imaging of target organs, tissues, cells, and molecular biomarkers during surgery. Furthermore, finer spectral and spatial resolution and a larger library of tissue spectra would allow more subtle differences in the spectral and spatial characteristics of different tissue types to be captured [204]. Wang et al. created a hybrid imaging system that combines a panchromatic camera’s high light throughput with the excellent spectral and spatial resolution of a coded aperture snapshot spectral imager (CASSI) to capture 4D high-speed films [205]. The exclusion of the important NIR range (1000–1900 nm), which contains valuable information on lipids, water, amines, and amino acids, is a limitation of the studies reviewed herein. This omission is due to the technical constraints of the HSI systems employed in these studies, which typically capture data in the VIS range and a less informative portion of the NIR range (700–1100 nm). Additionally, most studies focus on the clinical applicability of HSI within the capabilities of existing diagnostic tools, leading to the exclusion of this spectral range.
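The white and dark reference calibration mentioned above is typically computed per pixel and per band as R = (I_raw − I_dark)/(I_white − I_dark). A minimal NumPy sketch with flat, hypothetical reference frames (real white and dark references are measured from a calibration tile and a capped sensor, respectively):

```python
import numpy as np

def calibrate_reflectance(raw, white_ref, dark_ref, eps=1e-9):
    """White/dark reference calibration commonly used in HSI:
    R = (I_raw - I_dark) / (I_white - I_dark), clipped to [0, 1].
    eps guards against division by zero in dead bands."""
    refl = (raw - dark_ref) / (white_ref - dark_ref + eps)
    return np.clip(refl, 0.0, 1.0)

# Hypothetical 4 x 4 x 3 cube with flat references for illustration.
raw   = np.full((4, 4, 3), 60.0)
white = np.full((4, 4, 3), 100.0)
dark  = np.full((4, 4, 3), 20.0)

refl = calibrate_reflectance(raw, white, dark)
print(refl[0, 0])   # each band: (60 - 20) / (100 - 20) = 0.5
```

Because the result is expressed relative to the measured references, this normalization removes much of the dependence on absolute illumination intensity, which is why it helps address the lighting sensitivity discussed above.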
Future studies should address the difficulty of automatically analyzing data of high spatial and spectral dimensionality [206]. Another challenge is building a large spectral database covering a range of tissue types, from ocular to epidermal and subcutaneous tissue, as well as important molecular biomarkers. Such a database would make it possible to distinguish between different tissue types, such as the bile duct and the surrounding fatty tissue, as well as between oxygenated and deoxygenated blood. The large volume of data generated during HSI strains storage and processing, making HSI too time-consuming to use in certain situations [207]. In addition, the cost of HSI equipment can be a barrier to accessibility, restricting its availability. Future research could focus on validating these techniques on larger, more diverse datasets, which would improve their generalizability. Expanding the datasets to include a wider variety of HNC cases and conditions would provide a better understanding of how these algorithms perform in different clinical scenarios. Moreover, future studies could benefit from multi-center collaborations to collect comprehensive HSI datasets for cross-validation, helping to enhance the robustness and clinical applicability of CAD techniques.
A variety of techniques and tools are available to overcome these obstacles. Dimensionality reduction techniques can reduce a dataset's dimensionality and manage highly correlated bands [208] in many molecular component blends [209]. Spectral data fusion is also used for the visualization of HSI image data [210]. With the spectral scanning method, relatively stable imaging conditions can be achieved by progressively capturing spatial images at various wavelengths without mechanical movement of the optical system [201]. The snapshot technique has likewise been developed over the years to preserve imaging speed while enhancing spectral and spatial resolution [211,212,213], and the immense complexity of HSI data has led to the development of several image analysis techniques, including spectral unmixing, statistical analysis, the spectral angle mapper (SAM), and PCA.
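Two of the analysis techniques listed above can be illustrated in a few lines of NumPy: the spectral angle mapper, which compares a pixel spectrum to a reference, and a basic PCA projection for dimensionality reduction. The function names and synthetic spectra below are hypothetical, intended only as a sketch of the underlying mathematics:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against round-off

def pca_reduce(cube, n_components=3):
    """Project an (rows, cols, bands) cube onto its top principal components."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    Xc = X - X.mean(axis=0)                      # center each band
    cov = np.cov(Xc, rowvar=False)               # (bands, bands) covariance
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return (Xc @ top).reshape(rows, cols, n_components)

ref = np.array([0.2, 0.4, 0.6, 0.8])
same = spectral_angle(2.0 * ref, ref)   # scaled copy of the same spectrum → angle ≈ 0
diff = spectral_angle(ref[::-1], ref)   # reversed spectrum → large angle
print(same < 1e-6, diff > same)
```

Because the angle depends only on spectral shape, not magnitude, SAM is insensitive to uniform brightness changes, which complements the calibration strategies discussed earlier.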

5. Conclusions

The studies reviewed herein provide essential insights into the application of HSI and CAD methods for the detection and diagnosis of HNC. Various CAD techniques, including CNNs, SVMs, and LDA, were employed in the studies, and their performance was evaluated across in vivo and ex vivo experiments. LDA demonstrated the most consistent results, with one study achieving 94% accuracy, sensitivity, and specificity in detecting thyroid cancer. In comparison, studies using CNNs showed varying results depending on the cancer type and anatomical site. For instance, the accuracy ranged from 61% for the tongue to 95% for the oropharynx. The sensitivity in specific locations, such as the hypopharynx, was particularly low (20%), despite a high specificity of 99%. This variability indicates that while CNNs perform well for some anatomical sites, further refinement is needed to ensure consistency across all HNC types. SVMs also showed mixed results. One study reported high specificity (89%) but low sensitivity (48%), suggesting that while the SVM effectively reduces false positives, it may miss cancerous lesions, limiting its diagnostic utility in some cases. Overall, in vivo studies outperformed ex vivo experiments, demonstrating higher accuracy (81%), sensitivity (83%), and specificity (79%). This underscores the importance of real-time imaging, where external variables can be more effectively controlled. The studies also revealed that oropharyngeal cancer detection achieved the highest accuracy (95%), while hypopharyngeal cancer detection achieved the lowest (42%). Differences in spectral ranges and resolutions impacted the results, with broader wavelength ranges and higher resolutions contributing to better detection rates. One study utilizing a spectral range of 500–1000 nm and a 2-nm resolution achieved more comprehensive results in tumor margin detection. 
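The accuracy, sensitivity, and specificity values quoted above all derive from confusion-matrix counts. The following sketch shows the computation; the counts are chosen purely for illustration (they are not taken from any reviewed study) to reproduce the high-specificity, low-sensitivity pattern described for the SVM example:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true lesions detected
    specificity = tn / (tn + fp)   # fraction of healthy tissue correctly cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for a classifier that rarely flags lesions:
acc, sens, spec = diagnostic_metrics(tp=24, fp=11, tn=89, fn=26)
print(f"accuracy={acc:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
# → accuracy=0.75 sensitivity=0.48 specificity=0.89
```

As the example shows, a classifier can reach high specificity while missing more than half of the true lesions, which is why sensitivity and specificity must be reported alongside overall accuracy when comparing CAD methods.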
Despite these promising results, challenges remain, including HSI’s sensitivity to external factors, the need for high-resolution imaging, and the variability in dataset sizes. Future research should focus on larger datasets, more advanced machine learning models, and improved real-time imaging capabilities to make HSI more clinically applicable for HNC diagnosis.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/biomedicines12102315/s1, Figure S1. Flowchart based on diagnostic cohort study of HNC diagnosis, Figure S2. QUADAS-2 results, Figure S3. Forest plots, Figure S4. Deek’s funnel plot, Table S1. Computations for forest plot, Table S2. Summary of computations for Deek’s funnel plot, Table S3. Summary of computations for Deek’s funnel plot.

Author Contributions

Conceptualization, A.M., R.K., C.-C.W. and I.-C.W.; data curation, I.-C.W., Y.-C.C. and R.K.; formal analysis, I.-C.W., Y.-C.C. and G.G.; funding acquisition, Y.-C.C., A.M. and H.-C.W.; investigation, R.K. and G.G.; methodology, R.K., C.-C.W. and Y.-C.C.; project administration, A.M. and H.-C.W.; resources, A.M., C.-C.W. and H.-C.W.; software, R.K. and G.G.; supervision, H.-C.W.; writing—original draft, G.G.; writing—review and editing, G.G. and H.-C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by grants from the National Science and Technology Council (NSTC 113-2221-E-194-011-MY3, 112-2314-B-037-084-MY3), Kaohsiung Medical University Research Center Grant (KMU-TC112B04), and Kaohsiung Medical University Hospital (SI11203, KMUH-DK (C)-113001) in Taiwan.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available upon reasonable request to the corresponding author (H.-C.W.).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mehanna, H.; Paleri, V.; West, C.; Nutting, C. Head and neck cancer—Part 1: Epidemiology, presentation, and prevention. BMJ 2010, 341, c4684. [Google Scholar] [CrossRef] [PubMed]
  2. Johnson, D.E.; Burtness, B.; Leemans, C.R.; Lui, V.W.Y.; Bauman, J.E.; Grandis, J.R. Head and neck squamous cell carcinoma. Nat. Rev. Dis. Primers 2020, 6, 92. [Google Scholar] [CrossRef] [PubMed]
  3. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J. Clin. 2021, 71, 209–249. [Google Scholar] [CrossRef] [PubMed]
  4. Davies, L.; Welch, H.G. Epidemiology of head and neck cancer in the United States. Otolaryngol.—Head Neck Surg. 2006, 135, 451–457. [Google Scholar] [CrossRef] [PubMed]
  5. Hashim, D.; Sartori, S.; Vecchia, C.L.; Serraino, D.; Maso, L.D.; Negri, E.; Smith, E.; Levi, F.; Boccia, S.; Cadoni, G. Hormone factors play a favorable role in female head and neck cancer risk. Cancer Med. 2017, 6, 1998–2007. [Google Scholar] [CrossRef] [PubMed]
  6. Dhull, A.K.; Atri, R.; Dhankhar, R.; Chauhan, A.K.; Kaushal, V. Major risk factors in head and neck cancer: A retrospective analysis of 12-year experiences. World J. Oncol. 2018, 9, 80. [Google Scholar] [CrossRef]
  7. Duffy, S.A.; Ronis, D.L.; McLean, S.; Fowler, K.E.; Gruber, S.B.; Wolf, G.T.; Terrell, J.E. Pretreatment health behaviors predict survival among patients with head and neck squamous cell carcinoma. J. Clin. Oncol. 2009, 27, 1969. [Google Scholar] [CrossRef]
  8. Mayne, S.T.; Cartmel, B.; Kirsh, V.; Goodwin, W.J., Jr. Alcohol and tobacco use prediagnosis and postdiagnosis, and survival in a cohort of patients with early stage cancers of the oral cavity, pharynx, and larynx. Cancer Epidemiol. Biomark. Prev. 2009, 18, 3368–3374. [Google Scholar] [CrossRef]
  9. Warren, G.W.; Kasza, K.A.; Reid, M.E.; Cummings, K.M.; Marshall, J.R. Smoking at diagnosis and survival in cancer patients. Int. J. Cancer 2013, 132, 401–410. [Google Scholar] [CrossRef]
  10. Sharp, L.; McDevitt, J.; Carsin, A.-E.; Brown, C.; Comber, H. Smoking at diagnosis is an independent prognostic factor for cancer-specific survival in head and neck cancer: Findings from a large, population-based study. Cancer Epidemiol. Biomark. Prev. 2014, 23, 2579–2590. [Google Scholar] [CrossRef]
  11. Peterson, L.A.; Bellile, E.L.; Wolf, G.T.; Virani, S.; Shuman, A.G.; Taylor, J.M.; Rozek, L.S.; University of Michigan Head and Neck Specialized Program of Research Excellence Program. Cigarette use, comorbidities, and prognosis in a prospective head and neck squamous cell carcinoma population. Head Neck 2016, 38, 1810–1820. [Google Scholar] [CrossRef]
  12. Boffetta, P.; Merletti, F.; Faggiano, F.; Migliaretti, G.; Ferro, G.; Zanetti, R.; Terracini, B. Prognostic factors and survival of laryngeal cancer patients from Turin, Italy: A population-based study. Am. J. Epidemiol. 1997, 145, 1100–1105. [Google Scholar] [CrossRef]
  13. Hafkamp, H.C.; Manni, J.; Haesevoets, A.; Voogd, A.; Schepers, M.; Bot, F.; Hopman, A.; Ramaekers, F.; Speel, E.J.M. Marked differences in survival rate between smokers and nonsmokers with HPV 16-associated tonsillar carcinomas. Int. J. Cancer 2008, 122, 2656–2664. [Google Scholar] [CrossRef] [PubMed]
  14. Fortin, A.; Wang, C.S.; Vigneault, É. Influence of smoking and alcohol drinking behaviors on treatment outcomes of patients with squamous cell carcinomas of the head and neck. Int. J. Radiat. Oncol. Biol. Phys. 2009, 74, 1062–1069. [Google Scholar] [CrossRef] [PubMed]
  15. Elrefaey, S.; Massaro, M.; Chiocca, S.; Chiesa, F.; Ansarin, M. HPV in oropharyngeal cancer: The basics to know in clinical practice. Acta Otorhinolaryngol. Ital. 2014, 34, 299. [Google Scholar] [PubMed]
  16. Bloching, M.; Reich, W.; Schubert, J.; Grummt, T.; Sandner, A. The influence of oral hygiene on salivary quality in the Ames Test, as a marker for genotoxic effects. Oral Oncol. 2007, 43, 933–939. [Google Scholar] [CrossRef]
  17. Negri, E.; Boffetta, P.; Berthiller, J.; Castellsague, X.; Curado, M.P.; Maso, L.D.; Daudt, A.W.; Fabianova, E.; Fernandez, L.; Wünsch-Filho, V. Family history of cancer: Pooled analysis in the International Head and Neck Cancer Epidemiology Consortium. Int. J. Cancer 2009, 124, 394–401. [Google Scholar] [CrossRef]
  18. Gaudet, M.M.; Olshan, A.F.; Chuang, S.-C.; Berthiller, J.; Zhang, Z.-F.; Lissowska, J.; Zaridze, D.; Winn, D.M.; Wei, Q.; Talamini, R. Body mass index and risk of head and neck cancer in a pooled analysis of case–control studies in the International Head and Neck Cancer Epidemiology (INHANCE) Consortium. Int. J. Epidemiol. 2010, 39, 1091–1102. [Google Scholar] [CrossRef]
  19. Molina, M.A.; Cheung, M.C.; Perez, E.A.; Byrne, M.M.; Franceschi, D.; Moffat, F.L.; Livingstone, A.S.; Goodwin, W.J.; Gutierrez, J.C.; Koniaris, L.G. African American and poor patients have a dramatically worse prognosis for head and neck cancer: An examination of 20,915 patients. Cancer 2008, 113, 2797–2806. [Google Scholar] [CrossRef]
  20. Creaney, G.; McMahon, A.D.; Ross, A.J.; Bhatti, L.A.; Paterson, C.; Conway, D.I. Head and neck cancer in the UK: What was the stage before COVID-19? UK cancer registries analysis (2011–2018). Br. Dent. J. 2022, 233, 787–793. [Google Scholar] [CrossRef]
  21. Pai, S.I.; Westra, W.H. Molecular pathology of head and neck cancer: Implications for diagnosis, prognosis, and treatment. Annu. Rev. Pathol. Mech. Dis. 2009, 4, 49–70. [Google Scholar] [CrossRef] [PubMed]
  22. Bassani, S.; Santonicco, N.; Eccher, A.; Scarpa, A.; Vianini, M.; Brunelli, M.; Bisi, N.; Nocini, R.; Sacchetto, L.; Munari, E. Artificial intelligence in head and neck cancer diagnosis. J. Pathol. Inform. 2022, 13, 100153. [Google Scholar] [CrossRef] [PubMed]
  23. Chinnery, T.; Arifin, A.; Tay, K.Y.; Leung, A.; Nichols, A.C.; Palma, D.A.; Mattonen, S.A.; Lang, P. Utilizing artificial intelligence for head and neck cancer outcomes prediction from imaging. Can. Assoc. Radiol. J. 2021, 72, 73–85. [Google Scholar] [CrossRef]
  24. Razek, A.A.K.A.; Khaled, R.; Helmy, E.; Naglah, A.; AbdelKhalek, A.; El-Baz, A. Artificial intelligence and deep learning of head and neck cancer. Magn. Reson. Imaging Clin. 2022, 30, 81–94. [Google Scholar] [CrossRef] [PubMed]
  25. Kearney, V.; Chan, J.W.; Valdes, G.; Solberg, T.D.; Yom, S.S. The application of artificial intelligence in the IMRT planning process for head and neck cancer. Oral Oncol. 2018, 87, 111–116. [Google Scholar] [CrossRef] [PubMed]
  26. Fh, T.; Cyw, C.; Eyw, C. Radiomics AI prediction for head and neck squamous cell carcinoma (HNSCC) prognosis and recurrence with target volume approach. BJR Open 2021, 3, 20200073. [Google Scholar] [CrossRef]
  27. Diamant, A.; Chatterjee, A.; Vallières, M.; Shenouda, G.; Seuntjens, J. Deep learning in head & neck cancer outcome prediction. Sci. Rep. 2019, 9, 2764. [Google Scholar]
  28. Howard, F.M.; Kochanny, S.; Koshy, M.; Spiotto, M.; Pearson, A.T. Machine learning–guided adjuvant treatment of head and neck cancer. JAMA Netw. Open 2020, 3, e2025881. [Google Scholar] [CrossRef]
  29. Zhong, N.-N.; Wang, H.-Q.; Huang, X.-Y.; Li, Z.-Z.; Cao, L.-M.; Huo, F.-Y.; Liu, B.; Bu, L.-L. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin. Cancer Biol. 2023, 95, 52–74. [Google Scholar] [CrossRef]
  30. Bang, C.; Bernard, G.; Le, W.T.; Lalonde, A.; Kadoury, S.; Bahig, H. Artificial intelligence to predict outcomes of head and neck radiotherapy. Clin. Transl. Radiat. Oncol. 2023, 39, 100590. [Google Scholar] [CrossRef]
  31. Thukral, R.; Aggarwal, A.K.; Arora, A.S.; Dora, T.; Sancheti, S. Artificial intelligence-based prediction of oral mucositis in patients with head-and-neck cancer: A prospective observational study utilizing a thermographic approach. Cancer Res. Stat. Treat. 2023, 6, 181–190. [Google Scholar] [CrossRef]
  32. Inaba, A.; Hori, K.; Yoda, Y.; Ikematsu, H.; Takano, H.; Matsuzaki, H.; Watanabe, Y.; Takeshita, N.; Tomioka, T.; Ishii, G. Artificial intelligence system for detecting superficial laryngopharyngeal cancer with high efficiency of deep learning. Head Neck 2020, 42, 2581–2592. [Google Scholar] [CrossRef] [PubMed]
  33. Naser, M.A.; Wahid, K.A.; van Dijk, L.V.; He, R.; Abdelaal, M.A.; Dede, C.; Mohamed, A.S.; Fuller, C.D. Head and neck cancer primary tumor auto segmentation using model ensembling of deep learning in PET/CT images. In 3D Head and Neck Tumor Segmentation in PET/CT Challenge; Springer: Berlin/Heidelberg, Germany, 2021; Volume 13209, pp. 121–133. [Google Scholar]
  34. Kourou, K.; Exarchos, T.P.; Exarchos, K.P.; Karamouzis, M.V.; Fotiadis, D.I. Machine learning applications in cancer prognosis and prediction. Comput. Struct. Biotechnol. J. 2015, 13, 8–17. [Google Scholar] [CrossRef]
  35. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  36. Wang, S.; Yang, D.M.; Rong, R.; Zhan, X.; Fujimoto, J.; Liu, H.; Minna, J.; Wistuba, I.I.; Xie, Y.; Xiao, G. Artificial intelligence in lung cancer pathology image analysis. Cancers 2019, 11, 1673. [Google Scholar] [CrossRef]
  37. Bejnordi, B.E.; Veta, M.; Van Diest, P.J.; Van Ginneken, B.; Karssemeijer, N.; Litjens, G.; Van Der Laak, J.A.; Hermsen, M.; Manson, Q.F.; Balkenhol, M. Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer. JAMA 2017, 318, 2199–2210. [Google Scholar] [CrossRef]
  38. Stephens, K. Near-Infrared Imaging and Machine Learning Can Identify Hidden Tumors. AXIS Imaging News 2021. [Google Scholar]
  39. Manhas, J.; Gupta, R.K.; Roy, P.P. A review on automated cancer detection in medical images using machine learning and deep learning based computational techniques: Challenges and opportunities. Arch. Comput. Methods Eng. 2022, 29, 2893–2933. [Google Scholar] [CrossRef]
  40. Meyer-Veit, F.; Rayyes, R.; Gerstner, A.O.; Steil, J. Hyperspectral endoscopy using deep learning for laryngeal cancer segmentation. In Proceedings of the International Conference on Artificial Neural Networks, Bristol, UK, 6–9 September 2022; pp. 682–694. [Google Scholar]
  41. de Lucena, D.V.; da Silva Soares, A.; Coelho, C.J.; Wastowski, I.J.; Filho, A.R.G. Detection of tumoral epithelial lesions using hyperspectral imaging and deep learning. In Proceedings of the Computational Science–ICCS 2020: 20th International Conference, Amsterdam, The Netherlands, 3–5 June 2020; Proceedings, Part III 20. pp. 599–612. [Google Scholar]
  42. Galvão Filho, A.R.; Wastowski, I.J.; Moreira, M.A.; de PC Cysneiros, M.A.; Coelho, C.J. Pancreatic Cancer Detection Using Hyperspectral Imaging and Machine Learning. In Proceedings of the 2023 IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 2870–2874. [Google Scholar]
  43. Rahman, T.; Mahanta, L.; Chakraborty, C.; Das, A.; Sarma, J. Textural pattern classification for oral squamous cell carcinoma. J. Microsc. 2018, 269, 85–93. [Google Scholar] [CrossRef]
  44. Mookiah, M.; Shah, P.; Chakraborty, C.; Ray, A.K. Brownian motion curve-based textural classification and its application in cancer diagnosis. Anal. Quant. Cytol. Histol. 2011, 33, 158–168. [Google Scholar]
  45. dos Santos, D.F.; de Faria, P.R.; Travencolo, B.A.; do Nascimento, M.Z. Automated detection of tumor regions from oral histological whole slide images using fully convolutional neural networks. Biomed. Signal Process. Control 2021, 69, 102921. [Google Scholar] [CrossRef]
  46. Alabi, R.O.; Youssef, O.; Pirinen, M.; Elmusrati, M.; Mäkitie, A.A.; Leivo, I.; Almangush, A. Machine learning in oral squamous cell carcinoma: Current status, clinical concerns and prospects for future—A systematic review. Artif. Intell. Med. 2021, 115, 102060. [Google Scholar] [CrossRef] [PubMed]
  47. Volpe, S.; Pepa, M.; Zaffaroni, M.; Bellerba, F.; Santamaria, R.; Marvaso, G.; Isaksson, L.J.; Gandini, S.; Starzyńska, A.; Leonardi, M.C. Machine learning for head and neck cancer: A safe bet?—A clinically oriented systematic review for the radiation oncologist. Front. Oncol. 2021, 11, 772663. [Google Scholar] [CrossRef] [PubMed]
  48. Mubarak, H.K.; Zhou, X.; Palsgrove, D.; Sumer, B.D.; Chen, A.Y.; Fei, B. An ensemble learning method for detection of head and neck squamous cell carcinoma using polarized hyperspectral microscopic imaging. In Proceedings of the Medical Imaging 2024: Digital and Computational Pathology, Edinburgh, UK, 16–19 September 2024; pp. 169–178. [Google Scholar]
  49. Dixit, S.; Kumar, A.; Srinivasan, K. A Current Review of Machine Learning and Deep Learning Models in Oral Cancer Diagnosis: Recent Technologies, Open Challenges, and Future Research Directions. Diagnostics 2023, 13, 1353. [Google Scholar] [CrossRef]
  50. Lin, P.-Y.; Cheng, P.-C.; Hsu, W.-L.; Lo, W.-C.; Hsieh, C.-H.; Shueng, P.-W.; Liao, L.-J. Risk of CVD following radiotherapy for head and neck cancer: An updated systematic review and meta-analysis. Front. Oncol. 2022, 12, 820808. [Google Scholar] [CrossRef]
  51. Young, K.; Ma, E.; Kejriwal, S.; Nielsen, T.; Aulakh, S.S.; Birkeland, A.C. Intraoperative in vivo imaging modalities in head and neck cancer surgical margin delineation: A systematic review. Cancers 2022, 14, 3416. [Google Scholar] [CrossRef]
  52. Wang, Y.; Guo, Y.; Lu, J.; Sun, Y.; Yu, X.; Gopinath, S.C.; Lakshmipriya, T.; Wu, Y.S.; Wang, C. Nanodetection of head and neck cancer on titanium oxide sensing surface. Nanoscale Res. Lett. 2020, 15, 33. [Google Scholar] [CrossRef]
  53. Vohra, P.; Ngo, H.; Lee, W.; Vo-Dinh, T. Squamous cell carcinoma DNA detection using ultrabright SERS nanorattles and magnetic beads for head and neck cancer molecular diagnostics. Anal. Methods 2017, 9, 5550–5556. [Google Scholar] [CrossRef]
  54. Soares, A.C.; Soares, J.C.; Rodrigues, V.C.; Follmann, H.D.M.; Arantes, L.M.R.B.; Carvalho, A.C.; Melendez, M.E.; Fregnani, J.H.T.; Reis, R.M.; Carvalho, A.L. Microfluidic-based genosensor to detect human papillomavirus (HPV16) for head and neck cancer. ACS Appl. Mater. Interfaces 2018, 10, 36757–36763. [Google Scholar] [CrossRef]
  55. Carr, O.; Raymundo-Pereira, P.A.; Shimizu, F.M.; Sorroche, B.P.; Melendez, M.E.; de Oliveira Pedro, R.; Miranda, P.B.; Carvalho, A.L.; Reis, R.M.; Arantes, L.M. Genosensor made with a self-assembled monolayer matrix to detect MGMT gene methylation in head and neck cancer cell lines. Talanta 2020, 210, 120609. [Google Scholar] [CrossRef]
  56. Farnesi, E.; Rinaldi, S.; Liu, C.; Ballmaier, J.; Guntinas-Lichius, O.; Schmitt, M.; Cialla-May, D.; Popp, J. Label-Free SERS and MD Analysis of Biomarkers for Rapid Point-of-Care Sensors Detecting Head and Neck Cancer and Infections. Sensors 2023, 23, 8915. [Google Scholar] [CrossRef] [PubMed]
  57. Liu, Y.; Cheng, H.-D.; Huang, J.; Zhang, Y.; Tang, X.; Tian, J.-W.; Wang, Y. Computer aided diagnosis system for breast cancer based on color Doppler flow imaging. J. Med. Syst. 2012, 36, 3975–3982. [Google Scholar] [CrossRef] [PubMed]
  58. Halicek, M.; Fabelo, H.; Ortega, S.; Callico, G.M.; Fei, B. In-vivo and ex-vivo tissue analysis through hyperspectral imaging techniques: Revealing the invisible features of cancer. Cancers 2019, 11, 756. [Google Scholar] [CrossRef] [PubMed]
  59. Bueno, G.; González, R.; Déniz, O.; González, J.; García-Rojo, M. Colour model analysis for microscopic image processing. Diagn. Pathol. 2008, 3, S18. [Google Scholar] [CrossRef] [PubMed]
  60. Drukker, K.; Gruszauskas, N.P.; Sennett, C.A.; Giger, M.L. Breast US computer-aided diagnosis workstation: Performance with a large clinical diagnostic population. Radiology 2008, 248, 392–397. [Google Scholar] [CrossRef]
  61. Acharya, U.R.; Sree, S.V.; Krishnan, M.M.R.; Molinari, F.; Garberoglio, R.; Suri, J.S. Non-invasive automated 3D thyroid lesion classification in ultrasound: A class of ThyroScan™ systems. Ultrasonics 2012, 52, 508–520. [Google Scholar] [CrossRef]
  62. Chang, Y.; Paul, A.K.; Kim, N.; Baek, J.H.; Choi, Y.J.; Ha, E.J.; Lee, K.D.; Lee, H.S.; Shin, D.; Kim, N. Computer-aided diagnosis for classifying benign versus malignant thyroid nodules based on ultrasound images: A comparison with radiologist-based assessments. Med. Phys. 2016, 43, 554–567. [Google Scholar] [CrossRef]
  63. Rey-Barroso, L.; Peña-Gutiérrez, S.; Yáñez, C.; Burgos-Fernández, F.J.; Vilaseca, M.; Royo, S. Optical technologies for the improvement of skin cancer diagnosis: A review. Sensors 2021, 21, 252. [Google Scholar] [CrossRef]
  64. Neal, R.M. Bayesian Learning for Neural Networks; Springer Science & Business Media: Berlin, Germany, 2012; Volume 118. [Google Scholar]
  65. Saba, T. Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features. Microsc. Res. Tech. 2021, 84, 1272–1283. [Google Scholar] [CrossRef]
  66. Premaladha, J.; Ravichandran, K. Novel approaches for diagnosing melanoma skin lesions through supervised and deep learning algorithms. J. Med. Syst. 2016, 40, 96. [Google Scholar] [CrossRef]
  67. Afifi, S.; GholamHosseini, H.; Sinha, R. SVM classifier on chip for melanoma detection. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July 2017; pp. 270–274. [Google Scholar]
  68. Thiem, D.G.; Römer, P.; Blatt, S.; Al-Nawas, B.; Kämmerer, P.W. New approach to the old challenge of free flap monitoring—Hyperspectral imaging outperforms clinical assessment by earlier detection of perfusion failure. J. Pers. Med. 2021, 11, 1101. [Google Scholar] [CrossRef] [PubMed]
  69. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  70. Yamamoto, S.; Tsumura, N.; Ogawa-Ochiai, K.; Nakaguchi, T.; Kasahara, Y.; Namiki, T.; Miyake, Y. Early detection of disease-oriented state from hyperspectral tongue images with principal component analysis and vector rotation. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; pp. 3025–3028. [Google Scholar]
  71. Martin, M.E.; Wabuyele, M.B.; Chen, K.; Kasili, P.; Panjehpour, M.; Phan, M.; Overholt, B.; Cunningham, G.; Wilson, D.; DeNovo, R.C. Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection. Ann. Biomed. Eng. 2006, 34, 1061–1068. [Google Scholar] [CrossRef] [PubMed]
  72. Ghamisi, P.; Rasti, B.; Yokoya, N.; Wang, Q.; Hofle, B.; Bruzzone, L.; Bovolo, F.; Chi, M.; Anders, K.; Gloaguen, R. Multisource and multitemporal data fusion in remote sensing: A comprehensive review of the state of the art. IEEE Geosci. Remote Sens. Mag. 2019, 7, 6–39. [Google Scholar] [CrossRef]
  73. Goetz, A.F. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  74. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef]
  75. Qin, J. Hyperspectral imaging instruments. In Hyperspectral Imaging for Food Quality Analysis and Control; Elsevier: Amsterdam, The Netherlands, 2010; pp. 129–172. [Google Scholar]
  76. Bassler, M.C.; Stefanakis, M.; Sequeira, I.; Ostertag, E.; Wagner, A.; Bartsch, J.W.; Roeßler, M.; Mandic, R.; Reddmann, E.F.; Lorenz, A. Comparison of Whiskbroom and Pushbroom darkfield elastic light scattering spectroscopic imaging for head and neck cancer identification in a mouse model. Anal. Bioanal. Chem. 2021, 413, 7363–7383. [Google Scholar] [CrossRef]
  77. Mouroulis, P.; Green, R.O.; Chrien, T.G. Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information. Appl. Opt. 2000, 39, 2210–2220. [Google Scholar] [CrossRef]
  78. Dong, J.; Duan, Y.; Zhao, X.; Zhou, Q. Abwi Airborne Binocular Whiskbroom Imager: Camera Principles and the Workflow. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 48, 7–13. [Google Scholar] [CrossRef]
  79. Nayak, D.K.; Bhagvati, C. A new HSI based filtering technique for impulse noise removal in images. In Proceedings of the 2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), Jodhpur, Rajasthan, 18–21 December 2013; pp. 1–5. [Google Scholar]
  80. Pan, B.; Shi, Z.; Xu, X. Hierarchical guidance filtering-based ensemble classification for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4177–4189. [Google Scholar] [CrossRef]
  81. Lim, H.-T.; Murukeshan, V.M. A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications. Sci. Rep. 2016, 6, 24044. [Google Scholar] [CrossRef] [PubMed]
  82. Pust, O.; Fabricius, H. 3D and snapshot hyperspectral cameras based on continuously variable filters. In Proceedings of the 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 23–26 September 2018; pp. 1–4. [Google Scholar]
  83. Aswathy, C.; Sowmya, V.; Soman, K. Hyperspectral image denoising using low pass sparse banded filter matrix for improved sparsity based classification. Procedia Comput. Sci. 2015, 58, 26–33. [Google Scholar] [CrossRef]
  84. Chen, Z.; Jiang, J.; Jiang, X.; Fang, X.; Cai, Z. Spectral-spatial feature extraction of hyperspectral images based on propagation filter. Sensors 2018, 18, 1978. [Google Scholar] [CrossRef] [PubMed]
  85. Lee, S.; Namgoong, J.-M.; Kim, Y.; Cha, J.; Kim, J.K. Multimodal imaging of laser speckle contrast imaging combined with mosaic filter-based hyperspectral imaging for precise surgical guidance. IEEE Trans. Biomed. Eng. 2021, 69, 443–452. [Google Scholar] [CrossRef] [PubMed]
  86. Yang, L.; Li, X.; Yu, J.; Jiang, L.; Su, X.; Chen, F. Thin-Plate-Spline-based Registration for Correcting Local Non-linear Deformation of Bidirectional Whisk-broom Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 13332–13345. [Google Scholar] [CrossRef]
  87. Jia, Y.; Luo, Z. Improved Iterative Prediction Reconstruction for Compressive Whiskbroom Imaging. In Proceedings of the 2016 6th International Conference on Machinery, Materials, Environment, Biotechnology and Computer, Tianjin, China, 11–12 June 2016; pp. 1666–1670. [Google Scholar]
  88. Aboras, M.; Amasha, H.; Ibraheem, I. Early detection of melanoma using multispectral imaging and artificial intelligence techniques. Am. J. Biomed. Life Sci. 2015, 3, 29–33. [Google Scholar]
  89. Wiener, R.S. Low-dose computed tomography screening for lung cancer. Ann. Intern. Med. 2015, 162, 460. [Google Scholar] [CrossRef]
  90. Høye, G.; Løke, T.; Fridman, A. Method for quantifying image quality in push-broom hyperspectral cameras. Opt. Eng. 2015, 54, 053102. [Google Scholar] [CrossRef]
  91. Jiang, Y.; Li, C. Detection and discrimination of cotton foreign matter using push-broom based hyperspectral imaging: System design and capability. PLoS ONE 2015, 10, e0121969. [Google Scholar] [CrossRef]
  92. Sousa, J.J.; Toscano, P.; Matese, A.; Di Gennaro, S.F.; Berton, A.; Gatti, M.; Poni, S.; Pádua, L.; Hruška, J.; Morais, R. UAV-Based Hyperspectral Monitoring Using Push-Broom and Snapshot Sensors: A Multisite Assessment for Precision Viticulture Applications. Sensors 2022, 22, 6574. [Google Scholar] [CrossRef]
  93. Barreto, M.A.P.; Johansen, K.; Angel, Y.; McCabe, M.F. Radiometric assessment of a UAV-based push-broom hyperspectral camera. Sensors 2019, 19, 4699. [Google Scholar] [CrossRef] [PubMed]
  94. Jurado, J.M.; Pádua, L.; Hruška, J.; Feito, F.R.; Sousa, J.J. An efficient method for generating UAV-based hyperspectral mosaics using push-broom sensors. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6515–6531. [Google Scholar]
  95. Funatomi, T.; Ogawa, T.; Tanaka, K.; Kubo, H.; Caron, G.; Mouaddib, E.M.; Matsushita, Y.; Mukaigawa, Y. Eliminating temporal illumination variations in whisk-broom hyperspectral imaging. Int. J. Comput. Vis. 2022, 130, 1310–1324. [Google Scholar] [CrossRef]
  96. Huang, L.; Luo, R.; Liu, X.; Hao, X. Spectral imaging with deep learning. Light Sci. Appl. 2022, 11, 61. [Google Scholar] [CrossRef]
  97. Wu, J.; Xiong, B.; Lin, X.; He, J.; Suo, J.; Dai, Q. Snapshot hyperspectral volumetric microscopy. Sci. Rep. 2016, 6, 24624. [Google Scholar] [CrossRef]
  98. Fei, B. Hyperspectral imaging in medical applications. In Data Handling in Science and Technology; Elsevier: Amsterdam, The Netherlands, 2019; Volume 32, pp. 523–565. [Google Scholar]
  99. Karim, S.; Qadir, A.; Farooq, U.; Shakir, M.; Laghari, A.A. Hyperspectral imaging: A review and trends towards medical imaging. Curr. Med. Imaging 2023, 19, 417–427. [Google Scholar] [CrossRef]
  100. Fang, Y.-J.; Mukundan, A.; Tsao, Y.-M.; Huang, C.-W.; Wang, H.-C. Identification of early esophageal cancer by semantic segmentation. J. Pers. Med. 2022, 12, 1204. [Google Scholar] [CrossRef]
  101. Wahabzada, M.; Besser, M.; Khosravani, M.; Kuska, M.T.; Kersting, K.; Mahlein, A.-K.; Stürmer, E. Monitoring wound healing in a 3D wound model by hyperspectral imaging and efficient clustering. PLoS ONE 2017, 12, e0186425. [Google Scholar] [CrossRef]
  102. Dietrich, M.; Seidlitz, S.; Schreck, N.; Wiesenfarth, M.; Godau, P.; Tizabi, M.; Sellner, J.; Marx, S.; Knödler, S.; Allers, M.M. Machine learning-based analysis of hyperspectral images for automated sepsis diagnosis. arXiv 2021, arXiv:2106.08445. [Google Scholar]
  103. Rajabi, R.; Zehtabian, A.; Singh, K.D.; Tabatabaeenejad, A.; Ghamisi, P.; Homayouni, S. Hyperspectral imaging in environmental monitoring and analysis. Front. Environ. Sci. 2024, 11, 1353447. [Google Scholar] [CrossRef]
  104. Flores, H.; Lorenz, S.; Jackisch, R.; Tusa, L.; Contreras, I.C.; Zimmermann, R.; Gloaguen, R. UAS-based hyperspectral environmental monitoring of acid mine drainage affected waters. Minerals 2021, 11, 182. [Google Scholar] [CrossRef]
105. Chen, C.-W.; Tseng, Y.-S.; Mukundan, A.; Wang, H.-C. Air pollution: Sensitive detection of PM2.5 and PM10 concentration using hyperspectral imaging. Appl. Sci. 2021, 11, 4543. [Google Scholar] [CrossRef]
  106. Stuart, M.B.; McGonigle, A.J.; Willmott, J.R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors 2019, 19, 3071. [Google Scholar] [CrossRef] [PubMed]
  107. Tang, Y.; Song, S.; Gui, S.; Chao, W.; Cheng, C.; Qin, R. Active and low-cost hyperspectral imaging for the spectral analysis of a low-light environment. Sensors 2023, 23, 1437. [Google Scholar] [CrossRef] [PubMed]
  108. Dhakal, A.; Ooba, M.; Hayashi, K. Assessing impacts of forest conversion on terrestrial vertebrates combining forestry cost with HSI and InVEST: Case of Toyota city, Japan. Int. J. Biodivers. Sci. Ecosyst. Serv. Manag. 2014, 10, 198–215. [Google Scholar] [CrossRef]
  109. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef]
110. Fardusi, M.J.; Chianucci, F.; Barbati, A. Concept to practice of geospatial-information tools to assist forest management and planning under precision forestry framework: A review. Ann. Silvic. Res. 2017, 41, 3–14. [Google Scholar]
  111. Jusoff, K. Precision forestry using airborne hyperspectral imaging sensor. J. Agric. Sci. 2009, 1, 142. [Google Scholar] [CrossRef]
  112. Hycza, T.; Stereńczak, K.; Bałazy, R. Potential use of hyperspectral data to classify forest tree species. N. Z. J. For. Sci. 2018, 48, 18. [Google Scholar] [CrossRef]
113. Shukla, A.; Kot, R. An overview of hyperspectral remote sensing and its applications in various disciplines. IRA Int. J. Appl. Sci. 2016, 5, 85–90. [Google Scholar] [CrossRef]
  114. Rouskov, K.; Popov, K.; Stoykov, S.; Yamaguchi, Y. Some applications of the remote sensing in geology by using of ASTER images. In Proceedings of the Scientific Conference “SPACE, ECOLOGY, SAFETY” with International Participation, Varna, Bulgaria, 10–13 June 2005; pp. 167–173. [Google Scholar]
  115. Bedini, E. The use of hyperspectral remote sensing for mineral exploration: A review. J. Hyperspectral Remote Sens. 2017, 7, 189–211. [Google Scholar] [CrossRef]
  116. Ye, B.; Tian, S.; Cheng, Q.; Ge, Y. Application of lithological mapping based on advanced hyperspectral imager (AHSI) imagery onboard Gaofen-5 (GF-5) satellite. Remote Sens. 2020, 12, 3990. [Google Scholar] [CrossRef]
  117. Chiu, J.K.; Selen, L.; Koerting, F. Potential Applications of Hyperspectral Imaging on Weak Rock Degradation Studies in Engineering Geology. In Proceedings of the Geo-Congress, Los Angeles, CA, USA, 26–29 March 2023; pp. 11–21. [Google Scholar]
  118. Ramakrishnan, D.; Bharti, R. Hyperspectral remote sensing and geological applications. Curr. Sci. 2015, 108, 879–891. [Google Scholar]
  119. Zhang, C.; Li, G.; Du, S. Multi-scale dense networks for hyperspectral remote sensing image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9201–9222. [Google Scholar] [CrossRef]
  120. Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2017, 56, 937–949. [Google Scholar] [CrossRef]
  121. Plaza, A.; Du, Q.; Chang, Y.-L.; King, R.L. High performance computing for hyperspectral remote sensing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2011, 4, 528–544. [Google Scholar] [CrossRef]
  122. Pu, R. Hyperspectral Remote Sensing: Fundamentals and Practices; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  123. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  124. Dale, L.M.; Thewis, A.; Boudry, C.; Rotar, I.; Dardenne, P.; Baeten, V.; Pierna, J.A.F. Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: A review. Appl. Spectrosc. Rev. 2013, 48, 142–159. [Google Scholar] [CrossRef]
  125. Tsuchikawa, S.; Ma, T.; Inagaki, T. Application of near-infrared spectroscopy to agriculture and forestry. Anal. Sci. 2022, 38, 635–642. [Google Scholar] [CrossRef]
  126. Wang, C.; Liu, B.; Liu, L.; Zhu, Y.; Hou, J.; Liu, P.; Li, X. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253. [Google Scholar] [CrossRef]
  127. Teke, M.; Deveci, H.S.; Haliloğlu, O.; Gürbüz, S.Z.; Sakarya, U. A short survey of hyperspectral remote sensing applications in agriculture. In Proceedings of the 2013 6th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013; pp. 171–176. [Google Scholar]
  128. Huang, S.-Y.; Mukundan, A.; Tsao, Y.-M.; Kim, Y.; Lin, F.-C.; Wang, H.-C. Recent advances in counterfeit art, document, photo, hologram, and currency detection using hyperspectral imaging. Sensors 2022, 22, 7308. [Google Scholar] [CrossRef] [PubMed]
  129. Baek, S.; Choi, E.; Baek, Y.; Lee, C. Detection of counterfeit banknotes using multispectral images. Digit. Signal Process. 2018, 78, 294–304. [Google Scholar] [CrossRef]
  130. Mukundan, A.; Tsao, Y.-M.; Cheng, W.-M.; Lin, F.-C.; Wang, H.-C. Automatic Counterfeit Currency Detection Using a Novel Snapshot Hyperspectral Imaging Algorithm. Sensors 2023, 23, 2026. [Google Scholar] [CrossRef] [PubMed]
  131. Wu, Y.; Li, X.; Xu, L.; Fan, R.; Lin, Y.; Zhan, C.; Kang, Z. Counterfeit detection of bulk Baijiu based on fluorescence hyperspectral technology and machine learning. J. Food Meas. Charact. 2024, 18, 3032–3041. [Google Scholar] [CrossRef]
  132. Ismail, H. Hyperspectral Imaging for Detecting Counterfeit Currency and Forensic Applications; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  133. Makki, I.; Younes, R.; Francis, C.; Bianchi, T.; Zucchetti, M. A survey of landmine detection using hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2017, 124, 40–53. [Google Scholar] [CrossRef]
  134. Shimoni, M.; Haelterman, R.; Perneel, C. Hypersectral imaging for military and security applications: Combining myriad processing and sensing techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
  135. Ke, C. Military object detection using multiple information extracted from hyperspectral imagery. In Proceedings of the 2017 International Conference on Progress in Informatics and Computing (PIC), Nanjing, China, 27–29 October 2017; pp. 124–128. [Google Scholar]
  136. Stein, D.; Schoonmaker, J.; Coolbaugh, E. Hyperspectral imaging for intelligence, surveillance, and reconnaissance. In Space and Naval Systems Warfare Center (SSC) San Diego Biennial Review; SSC: San Diego, CA, USA, 2001; p. 108116. [Google Scholar]
  137. Tiwari, K.C.; Arora, M.K.; Singh, D. An assessment of independent component analysis for detection of military targets from hyperspectral images. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 730–740. [Google Scholar] [CrossRef]
  138. Mukundan, A.; Wang, H.-C. Simplified approach to detect satellite maneuvers using TLE data and simplified perturbation model utilizing orbital element variation. Appl. Sci. 2021, 11, 10181. [Google Scholar] [CrossRef]
  139. Pisani, M.; Zucco, M. Simple and cheap hyperspectral imaging for astronomy (and more). In Proceedings of the Unconventional Optical Imaging, Strasbourg, France, 22–26 April 2018; pp. 25–32. [Google Scholar]
  140. Courbot, J.-B.; Mazet, V.; Monfrini, E.; Collet, C. Pairwise Markov fields for segmentation in astronomical hyperspectral images. Signal Process. 2019, 163, 41–48. [Google Scholar] [CrossRef]
  141. Vasile, M.; Walker, L.; Dunphy, R.D.; Zabalza, J.; Murray, P.; Marshall, S.; Savitski, V. Intelligent characterisation of space objects with hyperspectral imaging. Acta Astronaut. 2023, 203, 510–534. [Google Scholar] [CrossRef]
  142. Lodhi, V.; Chakravarty, D.; Mitra, P. Hyperspectral imaging for earth observation: Platforms and instruments. J. Indian Inst. Sci. 2018, 98, 429–443. [Google Scholar] [CrossRef]
  143. Muniz, F.B.; Baffa, M.d.F.O.; Garcia, S.B.; Bachmann, L.; Felipe, J.C. Histopathological diagnosis of colon cancer using micro-FTIR hyperspectral imaging and deep learning. Comput. Methods Programs Biomed. 2023, 231, 107388. [Google Scholar] [CrossRef] [PubMed]
  144. Kiyotoki, S.; Nishikawa, J.; Okamoto, T.; Hamabe, K.; Saito, M.; Goto, A.; Fujita, Y.; Hamamoto, Y.; Takeuchi, Y.; Satori, S. New method for detection of gastric cancer by hyperspectral imaging: A pilot study. J. Biomed. Opt. 2013, 18, 026010. [Google Scholar] [CrossRef] [PubMed]
  145. Ortega, S.; Halicek, M.; Fabelo, H.; Guerra, R.; Lopez, C.; Lejeune, M.; Godtliebsen, F.; Callico, G.M.; Fei, B. Hyperspectral imaging and deep learning for the detection of breast cancer cells in digitized histological images. In Proceedings of the Medical Imaging 2020: Digital Pathology, Houston, TX, USA, 19–20 February 2020; pp. 206–214. [Google Scholar]
  146. Leon, R.; Martinez-Vega, B.; Fabelo, H.; Ortega, S.; Melian, V.; Castaño, I.; Carretero, G.; Almeida, P.; Garcia, A.; Quevedo, E. Non-invasive skin cancer diagnosis using hyperspectral imaging for in-situ clinical support. J. Clin. Med. 2020, 9, 1662. [Google Scholar] [CrossRef]
  147. Yao, H.-Y.; Tseng, K.-W.; Nguyen, H.-T.; Kuo, C.-T.; Wang, H.-C. Hyperspectral ophthalmoscope images for the diagnosis of diabetic retinopathy stage. J. Clin. Med. 2020, 9, 1613. [Google Scholar] [CrossRef]
  148. Zherdeva, L.A.; Bratchenko, I.A.; Alonova, M.V.; Myakinin, O.O.; Artemyev, D.N.; Moryatov, A.A.; Kozlov, S.V.; Zakharov, V.P. Hyperspectral imaging of skin and lung cancers. In Proceedings of the Biophotonics: Photonic Solutions for Better Health Care V, Brussels, Belgium, 4–7 April 2016; pp. 25–34. [Google Scholar]
  149. Akbari, H.; Halig, L.V.; Schuster, D.M.; Osunkoya, A.; Master, V.; Nieh, P.T.; Chen, G.Z.; Fei, B. Hyperspectral imaging and quantitative analysis for prostate cancer detection. J. Biomed. Opt. 2012, 17, 076005. [Google Scholar] [CrossRef]
  150. Jansen-Winkeln, B.; Barberio, M.; Chalopin, C.; Schierle, K.; Diana, M.; Köhler, H.; Gockel, I.; Maktabi, M. Feedforward artificial neural network-based colorectal Cancer detection using Hyperspectral imaging: A step towards automatic optical biopsy. Cancers 2021, 13, 967. [Google Scholar] [CrossRef]
  151. Notarstefano, V.; Sabbatini, S.; Conti, C.; Pisani, M.; Astolfi, P.; Pro, C.; Rubini, C.; Vaccari, L.; Giorgini, E. Investigation of human pancreatic cancer tissues by Fourier Transform Infrared Hyperspectral Imaging. J. Biophotonics 2020, 13, e201960071. [Google Scholar] [CrossRef]
  152. Wang, J.; Li, Q. Quantitative analysis of liver tumors at different stages using microscopic hyperspectral imaging technology. J. Biomed. Opt. 2018, 23, 106002. [Google Scholar] [CrossRef]
  153. Fabelo, H.; Ortega, S.; Ravi, D.; Kiran, B.R.; Sosa, C.; Bulters, D.; Callicó, G.M.; Bulstrode, H.; Szolna, A.; Piñeiro, J.F. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations. PLoS ONE 2018, 13, e0193721. [Google Scholar] [CrossRef]
  154. Pérez, S.; Van de Berg, N.; Manni, F.; Lai, M.; Rijstenberg, L.; Hendriks, B.; Dankelman, J.; Ewing-Graham, P.; Nieuwenhuyzen-de Boer, G.; Van Beekhuizen, H. 303 Hyperspectral imaging for tissue classification after ovarian cancer surgery. Int. J. Gynecol. Cancer 2021, 31, A156.3–A157. [Google Scholar]
  155. Tsai, T.-J.; Mukundan, A.; Chi, Y.-S.; Tsao, Y.-M.; Wang, Y.-K.; Chen, T.-H.; Wu, I.-C.; Huang, C.-W.; Wang, H.-C. Intelligent identification of early esophageal cancer by band-selective hyperspectral imaging. Cancers 2022, 14, 4292. [Google Scholar] [CrossRef] [PubMed]
  156. Whiting, P.F.; Rutjes, A.W.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.; Sterne, J.A.; Bossuyt, P.M.; QUADAS-2 Group. QUADAS-2: A revised tool for the quality assessment of diagnostic accuracy studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef] [PubMed]
  157. Yang, B.; Mallett, S.; Takwoingi, Y.; Davenport, C.F.; Hyde, C.J.; Whiting, P.F.; Deeks, J.J.; Leeflang, M.M.; QUADAS-C Group. QUADAS-C: A tool for assessing risk of bias in comparative diagnostic accuracy studies. Ann. Intern. Med. 2021, 174, 1592–1599. [Google Scholar] [CrossRef] [PubMed]
  158. Cook, C.; Cleland, J.; Huijbregts, P. Creation and critique of studies of diagnostic accuracy: Use of the STARD and QUADAS methodological quality assessment tools. J. Man. Manip. Ther. 2007, 15, 93–102. [Google Scholar] [CrossRef] [PubMed]
  159. Schueler, S.; Schuetz, G.M.; Dewey, M. The revised QUADAS-2 tool. Ann. Intern. Med. 2012, 156, 323. [Google Scholar] [CrossRef]
  160. Venazzi, A.; Swardfager, W.; Lam, B.; Siqueira, J.d.O.; Herrmann, N.; Cogo-Moreira, H. Validity of the QUADAS-2 in assessing risk of bias in Alzheimer’s disease diagnostic accuracy studies. Front. Psychiatry 2018, 9, 221. [Google Scholar] [CrossRef]
  161. Zieliński, G.; Zięba, E.; Byś, A. Review of studies on the impact of climbing as a complementary form of depression treatment and their evaluation according to the QUADAS-2 tool. Psychiatr. Pol. 2021, 55, 1341–1356. [Google Scholar] [CrossRef]
  162. Leeflang, M.M. Quality assessment in systematic reviews comparing the diagnostic accuracy of multiple tests. Clin. Microbiol. Infect. 2022, 28, 629–630. [Google Scholar] [CrossRef]
  163. Podstawka, Z.; Zieliński, G.; Gawda, P. Psychosomatic aspects of motion and seasickness-a literature review and evaluation according to QUADAS-2 tool. J. Educ. Health Sport 2020, 10, 49–61. [Google Scholar] [CrossRef]
  164. Sounderajah, V.; Ashrafian, H.; Rose, S.; Shah, N.H.; Ghassemi, M.; Golub, R.; Kahn Jr, C.E.; Esteva, A.; Karthikesalingam, A.; Mateen, B. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat. Med. 2021, 27, 1663–1665. [Google Scholar] [CrossRef] [PubMed]
  165. Adeoye, J.; Wan, C.C.J.; Thomson, P. An appraisal of pivotal evaluation designs in validating noninvasive biomarkers for head and neck cancer detection. Acta Oncol. 2020, 59, 1500–1502. [Google Scholar] [CrossRef] [PubMed]
  166. Halicek, M.; Dormer, J.D.; Little, J.V.; Chen, A.Y.; Myers, L.; Sumer, B.D.; Fei, B. Hyperspectral imaging of head and neck squamous cell carcinoma for cancer margin detection in surgical specimens from 102 patients using deep learning. Cancers 2019, 11, 1367. [Google Scholar] [CrossRef] [PubMed]
  167. Eggert, D.; Bengs, M.; Westermann, S.; Gessert, N.; Gerstner, A.O.; Mueller, N.A.; Bewarder, J.; Schlaefer, A.; Betz, C.; Laffers, W. In vivo detection of head and neck tumors by hyperspectral imaging combined with deep learning methods. J. Biophotonics 2022, 15, e202100167. [Google Scholar] [CrossRef]
  168. Halicek, M.; Lu, G.; Little, J.V.; Wang, X.; Patel, M.; Griffith, C.C.; El-Deiry, M.W.; Chen, A.Y.; Fei, B. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 2017, 22, 060503. [Google Scholar] [CrossRef]
  169. Ma, L.; Little, J.V.; Chen, A.Y.; Myers, L.; Sumer, B.D.; Fei, B. Automatic detection of head and neck squamous cell carcinoma on histologic slides using hyperspectral microscopic imaging. J. Biomed. Opt. 2022, 27, 046501. [Google Scholar]
  170. Halicek, M.; Dormer, J.D.; Little, J.V.; Chen, A.Y.; Fei, B. Tumor detection of the thyroid and salivary glands using hyperspectral imaging and deep learning. Biomed. Opt. Express 2020, 11, 1383–1400. [Google Scholar] [CrossRef]
  171. Pertzborn, D.; Nguyen, H.-N.; Hüttmann, K.; Prengel, J.; Ernst, G.; Guntinas-Lichius, O.; von Eggeling, F.; Hoffmann, F. Intraoperative assessment of tumor margins in tissue sections with hyperspectral imaging and machine learning. Cancers 2022, 15, 213. [Google Scholar] [CrossRef]
  172. Fei, B.; Lu, G.; Wang, X.; Zhang, H.; Little, J.V.; Patel, M.R.; Griffith, C.C.; El-Diery, M.W.; Chen, A.Y. Label-free reflectance hyperspectral imaging for tumor margin assessment: A pilot study on surgical specimens of cancer patients. J. Biomed. Opt. 2017, 22, 086009. [Google Scholar] [CrossRef]
  173. Razzak, M.I.; Naz, S.; Zaib, A. Deep learning for medical image processing: Overview, challenges and the future. In Classification in BioApps: Automation of Decision Making; Springer: Berlin/Heidelberg, Germany, 2018; pp. 323–350. [Google Scholar]
  174. Zhang, Y.; Wang, Q.; Li, Y.; Wu, X. Sentiment Classification Based on Piecewise Pooling Convolutional Neural Network. Comput. Mater. Contin. 2018, 56, 285–297. [Google Scholar]
  175. Liu, J.; Song, S.; Sun, G.; Fu, Y. Classification of ECG arrhythmia using CNN, SVM and LDA. In Proceedings of the International Conference on Artificial Intelligence and Security, Dublin, Ireland, 16–18 October 2019; pp. 191–201. [Google Scholar]
  176. Shi, J.; Zhang, Z.; Li, Y.; Wang, R.; Shi, H.; Li, X. New method for computer identification through electromagnetic radiation. CMC Comput. Mater. Contin. 2018, 57, 69–80. [Google Scholar] [CrossRef]
  177. Johnson, N.W.; Gupta, B.; Speicher, D.J.; Ray, C.S.; Shaikh, M.H.; Al-Hebshi, N.; Gupta, P.C. Etiology and risk factors. In Oral and Oropharyngeal Cancer; CRC Press: Boca Raton, FL, USA, 2018; pp. 19–94. [Google Scholar]
  178. Ragin, C.; Modugno, F.; Gollin, S. The epidemiology and risk factors of head and neck cancer: A focus on human papillomavirus. J. Dent. Res. 2007, 86, 104–114. [Google Scholar] [CrossRef] [PubMed]
  179. Kumar, R.; Rai, A.K.; Das, D.; Das, R.; Kumar, R.S.; Sarma, A.; Sharma, S.; Kataki, A.C.; Ramteke, A. Alcohol and tobacco increases risk of high risk HPV infection in head and neck cancer patients: Study from North-East Region of India. PLoS ONE 2015, 10, e0140700. [Google Scholar] [CrossRef] [PubMed]
  180. Conway, D.I.; Purkayastha, M.; Chestnutt, I. The changing epidemiology of oral cancer: Definitions, trends, and risk factors. Br. Dent. J. 2018, 225, 867–873. [Google Scholar] [CrossRef]
  181. Grulich, A.E.; Jin, F.; Conway, E.L.; Stein, A.N.; Hocking, J. Cancers attributable to human papillomavirus infection. Sex. Health 2010, 7, 244–252. [Google Scholar] [CrossRef]
  182. Berman, T.A.; Schiller, J.T. Human papillomavirus in cervical cancer and oropharyngeal cancer: One cause, two diseases. Cancer 2017, 123, 2219–2229. [Google Scholar] [CrossRef]
  183. Zandberg, D.P.; Bhargava, R.; Badin, S.; Cullen, K.J. The role of human papillomavirus in nongenital cancers. CA A Cancer J. Clin. 2013, 63, 57–81. [Google Scholar] [CrossRef]
  184. Gillison, M.L. Current topics in the epidemiology of oral cavity and oropharyngeal cancers. Head Neck: J. Sci. Spec. Head Neck 2007, 29, 779–792. [Google Scholar] [CrossRef]
  185. Piagkou, M.; Demesticha, T.; Chrissanthou, I.; Lappas, D.; Piagkos, G.; Mazarakis, A.; Skandalakis, P. Tongue Anatomy and Oral Cancer: The Route of Metastasis. In Tongue: Anatomy, Kinematics and Diseases; Nova Biomedical: Runcorn, UK, 2012; pp. 209–220. [Google Scholar]
  186. Wang, X.; Peng, W.; Li, C.; Qin, R.; Zhong, Z.; Sun, C. Identification of an immune-related signature indicating the dedifferentiation of thyroid cells. Cancer Cell Int. 2021, 21, 231. [Google Scholar] [CrossRef]
  187. Pellegriti, G.; Frasca, F.; Regalbuto, C.; Squatrito, S.; Vigneri, R. Worldwide increasing incidence of thyroid cancer: Update on epidemiology and risk factors. J. Cancer Epidemiol. 2013, 2013, 965212. [Google Scholar] [CrossRef]
  188. Wang, J.; Yu, F.; Shang, Y.; Ping, Z.; Liu, L. Thyroid cancer: Incidence and mortality trends in China, 2005–2015. Endocrine 2020, 68, 163–173. [Google Scholar] [CrossRef] [PubMed]
  189. Kim, J.; Gosnell, J.E.; Roman, S.A. Geographic influences in the global rise of thyroid cancer. Nat. Rev. Endocrinol. 2020, 16, 17–29. [Google Scholar] [CrossRef] [PubMed]
  190. Roman, B.R.; Morris, L.G.; Davies, L. The thyroid cancer epidemic, 2017 perspective. Curr. Opin. Endocrinol. Diabetes Obes. 2017, 24, 332–336. [Google Scholar] [CrossRef] [PubMed]
  191. Man, W.; Hu, J.; Zhao, Z.; Zhang, M.; Wang, T.; Lin, J.; Duan, Y.; Wang, L.; Wang, H.; Sun, D. Diagnostic performance of instantaneous wave-free ratio for the evaluation of coronary stenosis severity confirmed by fractional flow reserve: A PRISMA-compliant meta-analysis of randomized studies. Medicine 2016, 95, e4774. [Google Scholar] [CrossRef]
  192. Yan, H.; Ma, F.; Zhang, Y.; Wang, C.; Qiu, D.; Zhou, K.; Hua, Y.; Li, Y. miRNAs as biomarkers for diagnosis of heart failure: A systematic review and meta-analysis. Medicine 2017, 96, e6825. [Google Scholar] [CrossRef]
  193. Zhang, M.; Min, Z.; Rana, N.; Liu, H. Accuracy of magnetic resonance imaging in grading knee chondral defects. Arthrosc. J. Arthrosc. Relat. Surg. 2013, 29, 349–356. [Google Scholar] [CrossRef]
  194. Zhu, W.; Zeng, N.; Wang, N. Sensitivity, specificity, accuracy, associated confidence interval and ROC analysis with practical SAS implementations. In Proceedings of the NESUG Proceedings: Health Care and Life Sciences, Baltimore, MD, USA, 14–17 November 2010; Volume 19, p. 67. [Google Scholar]
  195. Zhang, X.; Hsi, W.C.; Yang, F.; Wang, Z.; Sheng, Y.; Sun, J.; Yang, C.; Zhou, R. Development of an isocentric rotating chair positioner to treat patients of head and neck cancer at upright seated position with multiple nonplanar fields in a fixed carbon-ion beamline. Med. Phys. 2020, 47, 2450–2460. [Google Scholar] [CrossRef]
  196. Loperfido, A.; Celebrini, A.; Marzetti, A.; Bellocchi, G. Current role of artificial intelligence in head and neck cancer surgery: A systematic review of literature. Explor. Target. Anti-Tumor Ther. 2023, 4, 933. [Google Scholar] [CrossRef]
  197. Deepti; Ray, S. A review on application of machine learning and deep learning algorithms in head and neck cancer prediction and prognosis. In Data Management, Analytics and Innovation: Proceedings of ICDMAI 2021; Springer: Berlin/Heidelberg, Germany, 2021; Volume 1, pp. 59–73. [Google Scholar]
  198. Tama, B.A.; Kim, G.; Kim, S.W.; Lee, S. Recent advances in the application of artificial intelligence in otorhinolaryngology-head and neck surgery. Clin. Exp. Otorhinolaryngol. 2020, 13, 326. [Google Scholar] [CrossRef]
  199. López, F.; Mäkitie, A.; de Bree, R.; Franchi, A.; de Graaf, P.; Hernández-Prera, J.C.; Strojan, P.; Zidar, N.; Strojan Fležar, M.; Rodrigo, J.P. Qualitative and quantitative diagnosis in head and neck cancer. Diagnostics 2021, 11, 1526. [Google Scholar] [CrossRef]
  200. Gerstner, A.; Martin, R.; Westermann, S.; Mahlein, A.; Schmidt, K.; Thies, B.; Laffers, W. Hyperspectral imaging in head & neck oncology. Laryngo-Rhino-Otologie 2013, 92, 453–457. [Google Scholar] [PubMed]
  201. Yoon, J. Hyperspectral imaging for clinical applications. BioChip J. 2022, 16, 1–12. [Google Scholar] [CrossRef]
  202. Geladi, P.; Burger, J.; Lestander, T. Hyperspectral imaging: Calibration problems and solutions. Chemom. Intell. Lab. Syst. 2004, 72, 209–217. [Google Scholar] [CrossRef]
  203. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef]
  204. Cui, R.; Yu, H.; Xu, T.; Xing, X.; Cao, X.; Yan, K.; Chen, J. Deep learning in medical hyperspectral images: A review. Sensors 2022, 22, 9790. [Google Scholar] [CrossRef]
  205. Wang, L.; Xiong, Z.; Huang, H.; Shi, G.; Wu, F.; Zeng, W. High-speed hyperspectral video acquisition by combining nyquist and compressive sampling. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 857–870. [Google Scholar] [CrossRef]
  206. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  207. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef]
  208. Ayesha, S.; Hanif, M.K.; Talib, R. Overview and comparative study of dimensionality reduction techniques for high dimensional data. Inf. Fusion 2020, 59, 44–58. [Google Scholar] [CrossRef]
  209. Zuzak, K.J.; Naik, S.C.; Alexandrakis, G.; Hawkins, D.; Behbehani, K.; Livingston, E. Intraoperative bile duct visualization using near-infrared hyperspectral video imaging. Am. J. Surg. 2008, 195, 491–497. [Google Scholar] [CrossRef]
  210. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  211. Park, J.; Feng, X.; Liang, R.; Gao, L. Snapshot multidimensional photography through active optical mapping. Nat. Commun. 2020, 11, 5602. [Google Scholar] [CrossRef] [PubMed]
  212. Pawlowski, M.E.; Dwight, J.G.; Nguyen, T.-U.; Tkaczyk, T.S. High performance image mapping spectrometer (IMS) for snapshot hyperspectral imaging applications. Opt. Express 2019, 27, 1597–1612. [Google Scholar] [CrossRef] [PubMed]
  213. Hedde, P.N.; Cinco, R.; Malacrida, L.; Kamaid, A.; Gratton, E. Phasor-based hyperspectral snapshot microscopy allows fast imaging of live, three-dimensional tissues for biomedical applications. Commun. Biol. 2021, 4, 721. [Google Scholar] [CrossRef]
Figure 1. An illustrative diagram of key elements relating to head and neck cancer (HNC), including global incidence and mortality rates, a pie chart of HNC types, affected regions, risk factors, and a timeline of the projected increase in the number of cases by 2030.
Figure 2. Forest plots of the (a) accuracy, (b) sensitivity, and (c) specificity for different subgroups: cancer type, year of publication, CAD method, spectral band, and experiment type (in/ex vivo).
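Forest plots such as those in Figure 2 display each study's point estimate together with a 95% confidence interval. The figure does not restate which interval method was used, so the sketch below shows only the common normal-approximation (Wald) interval for a reported proportion; the function name and example numbers are illustrative, not taken from the review.

```python
import math

def wald_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% CI for a proportion such as accuracy.

    p: reported point estimate (0..1); n: number of samples it was computed on.
    The interval is clipped to [0, 1], since a proportion cannot leave that range.
    """
    se = math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: an accuracy of 0.82 reported for n = 100 lesions.
lo, hi = wald_ci(0.82, 100)
```

For small samples or estimates near 0 or 1 (e.g., the 99% specificity reported for hypopharynx in Table 3), a Wilson or exact (Clopper–Pearson) interval would be more appropriate than this approximation.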
Figure 3. Deeks' funnel plot for cancer types.
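Deeks' funnel plot asymmetry test, used in Figure 3 to probe for publication bias, plots each study's log diagnostic odds ratio (DOR) against the inverse square root of its effective sample size (ESS). A minimal sketch with hypothetical 2×2 counts (the counts and the 0.5 continuity correction are illustrative assumptions, not data from the review):

```python
import math

def log_dor(tp: int, fp: int, fn: int, tn: int, cc: float = 0.5) -> float:
    """Log diagnostic odds ratio with a 0.5 continuity correction on each cell."""
    tp, fp, fn, tn = (x + cc for x in (tp, fp, fn, tn))
    return math.log((tp * tn) / (fp * fn))

def deeks_point(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """One (x, y) point for Deeks' funnel plot: 1/sqrt(ESS) vs. ln(DOR)."""
    diseased, healthy = tp + fn, fp + tn
    ess = 4 * diseased * healthy / (diseased + healthy)  # effective sample size
    return 1.0 / math.sqrt(ess), log_dor(tp, fp, fn, tn)

x, y = deeks_point(tp=80, fp=10, fn=20, tn=90)  # hypothetical counts
```

The formal test then regresses ln(DOR) on 1/√ESS; a slope significantly different from zero indicates funnel-plot asymmetry.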
Figure 4. Accuracy plot for cancer types across all the studies included in this review: Halicek et al. (2019) [166], Eggert et al. (2021) [167], Halicek et al. (2017) [168], Ma et al. (2022) [169], Halicek et al. (2020) [170], Pertzborn et al. (2022) [171], and Fei et al. (2017) [172].
Table 1. Studies on the application of HSI in the medical field, including the detection of various cancers.

| Year | Authors | Application | Spectral Range |
|---|---|---|---|
| 2023 [143] | Barbosa et al. | Colon cancer | 2500–13,333 nm |
| 2013 [144] | Kiyotoki et al. | Gastric cancer | 400–800 nm |
| 2020 [145] | Ortega et al. | Breast cancer | 400–1000 nm |
| 2020 [146] | Leon et al. | Skin cancer | 450–950 nm |
| 2020 [147] | Hsin et al. | Diabetic retinopathy | 380–780 nm |
| 2016 [148] | Larisa et al. | Lung cancer | 500–670 nm |
| 2012 [149] | Akbari et al. | Prostate cancer | 450–950 nm |
| 2021 [150] | Boris et al. | Colorectal cancer | 500–1000 nm |
| 2019 [151] | Valentina et al. | Pancreatic cancer | 2500–11,111 nm |
| 2018 [152] | Wang et al. | Liver cancer | 550–1000 nm |
| 2018 [153] | Fabelo et al. | Brain cancer | 400–1000 nm |
| 2021 [154] | Perez et al. | Ovarian cancer | 665–975 nm |
| 2022 [155] | Tsai et al. | Esophageal cancer | 380–780 nm |
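Two entries in Table 1 are FTIR-based studies ([143], [151]) whose ranges are quoted in nanometres rather than the wavenumbers usual in infrared spectroscopy. A one-line conversion shows the correspondence; the function name is illustrative:

```python
def nm_to_wavenumber(wavelength_nm: float) -> float:
    """Convert wavelength (nm) to wavenumber (cm^-1); 1 cm = 1e7 nm."""
    return 1e7 / wavelength_nm

# The micro-FTIR range of ref. [143], 2500-13,333 nm, in wavenumbers:
high = nm_to_wavenumber(2500)   # 4000 cm^-1
low = nm_to_wavenumber(13333)   # ~750 cm^-1
```

This recovers the familiar mid-infrared FTIR window of roughly 4000–750 cm⁻¹.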
Table 2. QUADAS-2 results for the selected studies. Columns 2–5 give risk-of-bias judgments; columns 6–8 give applicability concerns. Low = low risk; ? = unclear risk. No study was rated high risk in any domain.

| Study | Patient Selection | Index Test | Reference Standard | Flow and Timing | Patient Selection | Index Test | Reference Standard |
|---|---|---|---|---|---|---|---|
| Halicek et al. [166] | Low | ? | Low | Low | Low | Low | Low |
| Eggert et al. [167] | Low | Low | Low | Low | Low | Low | Low |
| Halicek et al. [168] | Low | Low | Low | Low | Low | Low | Low |
| Ma et al. [169] | Low | Low | Low | Low | Low | Low | Low |
| Halicek et al. [170] | Low | Low | Low | Low | Low | Low | Low |
| Pertzborn et al. [171] | Low | Low | Low | Low | Low | Low | Low |
| Fei et al. [172] | Low | ? | Low | Low | Low | ? | Low |
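The QUADAS-2 judgments in Table 2 can be tallied programmatically to summarize overall study quality. The encoding below is a sketch transcribing the four risk-of-bias domains per study ("L" = low risk, "?" = unclear; no study was rated high risk):

```python
from collections import Counter

# Risk-of-bias judgments from Table 2, per study, in domain order:
# (patient selection, index test, reference standard, flow and timing).
risk_of_bias = {
    "Halicek 2019 [166]":   ("L", "?", "L", "L"),
    "Eggert 2021 [167]":    ("L", "L", "L", "L"),
    "Halicek 2017 [168]":   ("L", "L", "L", "L"),
    "Ma 2022 [169]":        ("L", "L", "L", "L"),
    "Halicek 2020 [170]":   ("L", "L", "L", "L"),
    "Pertzborn 2022 [171]": ("L", "L", "L", "L"),
    "Fei 2017 [172]":       ("L", "?", "L", "L"),
}

# Count judgments across all studies and domains.
tally = Counter(j for judgments in risk_of_bias.values() for j in judgments)
```

Across the 28 risk-of-bias judgments, 26 are low risk and 2 (both in the index-test domain) are unclear.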
Table 3. Clinical classification of the selected studies.

| Authors | Year | Cancer Types | Method | Accuracy | Sensitivity | Specificity | Wavelength | Resolution | In/Ex Vivo | Patients |
|---|---|---|---|---|---|---|---|---|---|---|
| Halicek et al. [166] | 2019 | Oral cavity | CNN | 63 ± 5% | 71 ± 8% | 49 ± 8% | 450–900 nm | 5 nm | In vivo | 102 |
| | | Tongue | | 61 ± 7% | 57 ± 9% | 53 ± 9% | | | | |
| | | Nasal cavity | | 79 ± 11% | 69 ± 17% | 73 ± 24% | | | | |
| | | Max. sinus | | 58 ± 19% | 93 ± 5% | 52 ± 18% | | | | |
| | | Larynx | | 79 ± 5% | 69 ± 11% | 71 ± 9% | | | | |
| | | Hypopharynx | | 42 ± 9% | 20 ± 14% | 99 ± 1% | | | | |
| | | Oropharynx | | 95 ± 4% | 49 ± 49% | 78 ± 22% | | | | |
| Eggert et al. [167] | 2021 | Laryngeal, hypopharyngeal, and oropharyngeal | 3D-CNN | 81% | 83% | 79% | 390–680 nm | 10 nm | In vivo | 98 |
| Halicek et al. [168] | 2017 | SCCa (trained on both) | CNN | 74 ± 14% | 79 ± 15% | 67 ± 20% | 450–900 nm | 5 nm | Ex vivo | 50 |
| | | Thyroid (trained on both) | | 88 ± 11% | 83 ± 23% | 92 ± 9% | | | | |
| Ma et al. [169] | 2022 | Larynx, hypopharynx, buccal mucosa, and floor of mouth | CNN | 82% | 72% | 93% | 470–720 nm | 3 nm | Ex vivo | 20 |
| Halicek et al. [170] | 2020 | Thyroid | CNN | 78 ± 2% | 80 ± 3% | 74 ± 3% | 450–900 nm | 5 nm | Ex vivo | 82 |
| | | Salivary | | 82 ± 8% | 72 ± 11% | 82 ± 11% | | | | |
| Pertzborn et al. [171] | 2022 | Oral | SVM | 76% | 48% | 89% | 500–1000 nm | 2 nm | Ex vivo | 7 |
| Fei et al. [172] | 2017 | Oral cavity | LDA | 90 ± 8% | 89 ± 9% | 91 ± 6% | 450–950 nm | 2 nm | In vivo | 16 |
| | | Thyroid | | 94 ± 6% | 94 ± 6% | 95 ± 6% | | | | |
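The accuracy, sensitivity, and specificity columns in Table 3 all derive from a study's 2×2 confusion matrix of true/false positives and negatives. As a reminder of the definitions, here is a minimal sketch; the counts are hypothetical, chosen only to echo the order of magnitude of values in the table:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix.

    tp/fn: malignant samples called positive/negative by the classifier;
    fp/tn: benign samples called positive/negative.
    """
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,       # fraction of all calls correct
        "sensitivity": tp / (tp + fn),       # recall on malignant samples
        "specificity": tn / (tn + fp),       # recall on benign samples
    }

m = diagnostic_metrics(tp=72, fp=7, fn=28, tn=93)  # hypothetical counts
```

Note that accuracy alone can be misleading when classes are imbalanced, which is why Table 3 reports all three metrics.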
Table 4. Meta-analysis and HNC subgroup analysis results for the selected studies.

| Subgroup | Number of Studies | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| All studies (average meta-analysis) | 7 | 0.77 | 0.68 | 0.80 |
| Cancer type | | | | |
| Oral | 3 | 0.76 | 0.48 | 0.89 |
| Tongue | 1 | 0.61 | 0.57 | 0.53 |
| Nasal | 1 | 0.79 | 0.69 | 0.73 |
| Max. sinus | 1 | 0.58 | 0.93 | 0.52 |
| Larynx | 1 | 0.79 | 0.69 | 0.71 |
| Hypopharynx | 1 | 0.42 | 0.20 | 0.99 |
| Oropharynx | 1 | 0.95 | 0.49 | 0.78 |
| Thyroid | 2 | 0.86 | 0.87 | 0.84 |
| Salivary | 1 | 0.82 | 0.72 | 0.82 |
| Publishing year | | | | |
| 2015–2017 | 2 | 0.86 | 0.87 | 0.86 |
| 2018–2021 | 3 | 0.81 | 0.83 | 0.79 |
| 2022–2024 | 2 | 0.79 | 0.60 | 0.91 |
| Method used | | | | |
| CNN | 5 | 0.82 | 0.77 | 0.86 |
| SVM | 1 | 0.76 | 0.48 | 0.89 |
| LDA | 1 | 0.92 | 0.91 | 0.93 |
| In/ex vivo | | | | |
| In vivo | 3 | 0.81 | 0.83 | 0.79 |
| Ex vivo | 4 | 0.79 | 0.60 | 0.91 |
| Band | | | | |
| VNIR | 5 | 0.75 | 0.69 | 0.76 |
| VIS | 2 | 0.81 | 0.77 | 0.86 |
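The subgroup rows of Table 4 amount to grouping study-level point estimates and summarizing each group. The sketch below shows the simplest unweighted version of that grouping; the published meta-analysis may weight studies differently (e.g., by sample size or with a random-effects model), and the values here are only an illustrative subset of Table 3:

```python
from statistics import mean

# (subgroup label, accuracy, sensitivity, specificity) per study;
# illustrative subset of Table 3, grouped by CAD method.
studies = [
    ("CNN", 0.82, 0.72, 0.93),  # Ma et al. [169]
    ("CNN", 0.78, 0.80, 0.74),  # Halicek et al. [170], thyroid
    ("SVM", 0.76, 0.48, 0.89),  # Pertzborn et al. [171]
]

def subgroup_means(rows, key):
    """Unweighted mean of each metric over the studies in one subgroup."""
    picked = [r[1:] for r in rows if r[0] == key]
    return tuple(mean(col) for col in zip(*picked))

acc, sens, spec = subgroup_means(studies, "CNN")
```

A weighted pooling (multiplying each study's estimate by its sample-size weight before summing) would follow the same grouping structure.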

Cite as: Wu, I.-C.; Chen, Y.-C.; Karmakar, R.; Mukundan, A.; Gabriel, G.; Wang, C.-C.; Wang, H.-C. Advancements in Hyperspectral Imaging and Computer-Aided Diagnostic Methods for the Enhanced Detection and Diagnosis of Head and Neck Cancer. Biomedicines 2024, 12, 2315. https://doi.org/10.3390/biomedicines12102315