Review

Deep Learning in Medical Hyperspectral Images: A Review

1 College of Electronic and Information Engineering, Changchun University, Changchun 130022, China
2 Jilin Provincial Key Laboratory of Human Health Status Identification and Function Enhancement, Changchun University, Changchun 130022, China
3 Image Engineering & Video Technology Lab, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
4 Beijing Institute of Technology Chongqing Innovation Center, Chongqing 401120, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9790; https://doi.org/10.3390/s22249790
Submission received: 7 November 2022 / Revised: 11 December 2022 / Accepted: 12 December 2022 / Published: 13 December 2022
(This article belongs to the Special Issue Hyperspectral Imaging Sensing and Analysis)

Abstract

With its continued development, deep learning has made good progress in the analysis and recognition of images, prompting researchers to explore its combination with hyperspectral medical images, with encouraging results. This paper introduces the principles and techniques of hyperspectral imaging systems, surveys the common medical hyperspectral imaging systems, and summarizes the progress of some emerging spectral imaging systems through an analysis of the literature. In particular, this article introduces the most frequently used medical hyperspectral images and the pre-processing techniques applied to the spectra, and in later sections it discusses the main developments in combining medical hyperspectral imaging with deep learning for disease diagnosis. On the basis of this review, the factors limiting the application of deep learning to hyperspectral medical images are outlined, promising research directions are summarized, and future research prospects are provided for subsequent scholars.

1. Introduction

Medical imaging refers to imaging technology used mainly to assist clinical work, often in the initial detection, treatment, and diagnosis of many diseases and in the guidance of operations. Modern medical imaging mainly uses magnetic resonance imaging (MRI), X-ray, optical coherence tomography (OCT), ultrasound, or a combination of several techniques. These modalities have had a profound impact on the diagnosis of diseases and have led to the development of further imaging techniques for clinical examinations. Deep learning has made significant progress in other areas of medical image processing, such as OCT, a non-invasive technique that scans the subject to obtain three-dimensional high-resolution images, mainly of the fundus retina; the main algorithms currently focus on convolutional neural networks, support vector machines (SVM), and the like. However, most of these imaging techniques are expensive and can even be harmful to the human body. It is therefore important to obtain an inexpensive and non-invasive imaging technique for medical images.
Hyperspectral imaging (HSI) originated in remote sensing, where it was used by NASA for various applications, and it offers richer spectral as well as spatial information than conventional optical images. It has been broadly applied in diverse fields such as remote sensing [1,2], agriculture [3,4], image enhancement [5], horticultural protection [6,7], disaster monitoring [8], food safety and assessment [9,10], and medicine [11,12,13], showing its great potential.
A hyperspectral image is formed by aligning images acquired in narrow bands of adjacent wavelengths and reconstructing the reflection spectrum of every pixel across those bands to obtain a three-dimensional hypercube. The resulting spatially resolved spectra give access to diagnostic information about tissue physiology, morphology, and composition, enabling the non-invasive observation of biopsies, histopathological and fluorometric analysis, and an increased understanding of the biology of disease. Hyperspectral imaging is one of the developing techniques among imaging modalities, and various spectral imaging systems have been investigated over the past decades for the assessment of various biological organs and tissues. From the predominant and traditionally used whiskbroom, push broom, staring, and snapshot imaging systems to the fluorescence hyperspectral imaging systems developed later, multispectral analysis as well as separation techniques have been implemented. Handheld hyperspectral imagers, which use single-image fast spectral capture and are capable of rapid imaging, are also gradually being adopted in research. Spectral imaging techniques in biomedicine have attracted increasing attention and have gained an important position in research.
The application of medical hyperspectral imaging (MHSI) to the diagnosis of various diseases has given rise to a variety of algorithms that combine with it to enable more accurate and efficient diagnosis, classification, and detection of disease. Machine learning (ML) typically employs data and statistical models that learn and recognize patterns to accomplish particular tasks. In MHSI processing, ML is mostly used for disease diagnosis and for the classification, detection, and segmentation of pathological images, including K-Nearest Neighbor (KNN) [14], Linear Discriminant Analysis (LDA) [15], and Support Vector Machine (SVM) methods. However, deep learning (DL) methods for MHSI have been increasingly proposed [16] and studied by academics, producing positive results ever since the large-scale image classification challenge of 2012, when a CNN trained on the ImageNet dataset made significant progress. For example, a study in 2017 [17] used a convolutional neural network (CNN) to classify blood cells in MHSI, distinguishing red cells from white cells. On medical hyperspectral datasets, CNNs clearly outperform the conventional SVM in classification accuracy, demonstrating the huge promise of DL in this field [18].
Over the past decade, a number of pioneers in this field have assembled correlative references and compared the parameters of common medical-imaging techniques [19]. These studies illustrate the convenience that hyperspectral imaging brings to medical bioengineering in comparison with traditional optical imaging methods, providing a greater wealth of information than was previously available. One study discussed the ongoing advancement of biomedical hyperspectral systems, compared the imaging approaches, and presented the current challenges [13]; it contributes to the extant literature by providing a well-balanced integration of academic opinion and practical perspectives. Subsequent articles have used criteria such as acquisition mode, spectral range, spatial resolution, and measurement mode to classify MHSI; methods for image analysis, as well as for disease diagnosis and surgical guidance, were also summarized [11]. Along with the growth of deep learning, the field has attracted many followers. Some authors have also summarized medical hyperspectral imaging in the context of deep learning, discussing deep learning approaches and how they are applied in the medical field [20].
However, existing studies of hyperspectral medicine are fragmented and not comprehensive, while DL is rapidly emerging, and the related studies are complicated but lack a theoretical foundation. A clear context is therefore needed to link hyperspectral images, hyperspectral medicine, and DL. An overview of the development experience of other associated and relatively mature research fields will provide a reference for later scholars developing this nascent field. Through a review of the relevant literature, a synthesis of key technical insights from current research, and an identification of the major research trends in this field, this study intends to address the following research questions: the development of hyperspectral imaging systems; the mainstream deep learning architectures for MHSI applications; and the problems addressed in medical diagnosis.
With the development of DL in MHSI, more attention will be drawn to its application. It will have a remarkable influence on the medical field, especially on the diagnosis of diseases and the guidance of surgery. This paper introduces various imaging systems for hyperspectral imaging as well as the usage of deep learning to classify, segment, and detect medical images and also gives a brief introduction to the application of hyperspectral imaging in medical applications.

2. Hyperspectral Imaging Technology

2.1. Imaging Principles and Techniques

Hyperspectral imaging is a modality that combines imaging with spectroscopy. It usually covers a continuous part of the spectrum and provides continuous scanning imaging over tens or hundreds of spectral ranges at ultraviolet (UV), visible (VIS), infrared, and even mid-infrared wavelengths [12]. As illustrated in Figure 1, the resulting data contain both two-dimensional spatial and one-dimensional spectral information and can be viewed as a superposition of several two-dimensional images [21]. This technique makes it possible to obtain the reflectance, absorption, or fluorescence spectrum of every pixel in the image. It offers richer spectral bands as well as higher spectral resolution than conventional RGB images and grayscale maps; it can reveal changes in objects that are invisible to conventional imaging techniques and captures minor spectral nuances in response to different pathological conditions.
The system mechanism of HSI is elucidated by the principle of the typical push broom hyperspectral system [11], as shown in Figure 2. First, light from the source illuminates the scene and passes through the front lens into the slit, where different wavelengths of light are bent to varying degrees. Each pixel point in that dimension is then projected onto the detector through dispersion devices such as gratings and prisms, which split the light into narrow spectral bands. Each row of sample-space information is treated as a two-dimensional image and imaged on the detector array. Moving across the plane by a mechanical push sweep, the HSI camera collects adjoining two-dimensional images, resulting in a hypercube with two spatial dimensions and one spectral dimension.
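The hypercube assembly described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the frame shapes are loosely modeled on the 640 × 480 × 100 cube mentioned later for the TIVITA camera, and no real sensor data are involved.

```python
import numpy as np

# A push broom camera captures one spatial line per step: each frame is a
# 2-D (slit position x wavelength) image. Stacking frames along the scan
# direction yields the (y, x, lambda) hypercube described above.
n_lines, slit_pixels, n_bands = 480, 640, 100

rng = np.random.default_rng(0)
frames = [rng.random((slit_pixels, n_bands)) for _ in range(n_lines)]

# Stack the line scans: axis 0 is the push direction, axis 1 the slit,
# axis 2 the spectral dimension.
hypercube = np.stack(frames, axis=0)
print(hypercube.shape)  # (480, 640, 100)

# The reflectance spectrum of a single pixel is one 1-D slice of the cube:
spectrum = hypercube[240, 320, :]
print(spectrum.shape)  # (100,)
```

Each pixel of the stacked cube thus carries a full spectrum, which is exactly the structure the pre-processing and classification methods in Section 3 operate on.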

2.2. Imaging System

A typical medical hyperspectral imaging system can be divided into an optical acquisition instrument, a spectrometer, a detector, system control, and a data acquisition module [13]. Optical acquisition instruments are the devices that produce the image presented to the spectrometer, such as a camera-like instrument that forms a real image or a microscope that forms a virtual image. After collection and summarization, the HSI imaging systems used in medicine are listed in Table 1.

2.2.1. Acquisition Mode

According to how hyperspectral systems acquire spectral and spatial information [22], they are classified into four typical modes: whiskbroom, push broom, staring, and snapshot [13], as shown in Figure 3.
Whiskbroom imaging systems, also known as point-scan methods, typically employ a rotating scanning mechanism that sweeps a single point clockwise along the spatial dimensions (x and y). The signal from the biological tissue passes sequentially through the swivel scanning mirror and the front optical system; the spectrometer then disperses the light, which is imaged on the CCD. The spectral image data cube (x, y, λ) is obtained by combining the spatial dimensions (x and y) scanned over the two-dimensional scene with the spectral dimension (λ).
1. Push broom
The push broom imaging system is also known as the line-scan method. Unlike the point scan, the line scan simultaneously acquires the spatial information of an entire slit, capturing in one sweep the spectral information of every spatial point along it. The light signal from the biological tissue passes in turn through the objective lens, the entrance slit, the collimation module, and then the dispersive element, which completes the spectroscopy, and is imaged on the CCD.
The CMOS push broom hyperspectral camera TIVITA is often used in the detection of medical biological tissue. It generates HS images at a spectral resolution of 5 nm over a spectral range of 500–1000 nm, producing a 640 × 480 × 100 data cube with an acquisition time of about 6–7 s, and it can also be used for RGB image reconstruction based on HSI. The distance between the camera and the tissue depends on the objective lens and is usually between 30 and 50 cm. During acquisition, the operating room lights are turned off to avoid interference from external light sources, and the camera is installed on a mobile and agile medical system, as shown in Figure 4B and described in detail in the following references [23,24].
Push broom imaging usually acquires a greater amount of light than whiskbroom imaging, providing a longer exposure time and higher spectral resolution for the detector [25]. Neither of these two imaging systems displays the spectral image in real time, since the image is derived from spectral calculations once the scanning of the corresponding points and regions is complete;
2. Staring
The staring-type imaging system, also called the spectral scanning method, captures at each step a single-band two-dimensional grayscale image of the complete spatial information, with the spectral data cube obtained by sweeping the spectral domain over multiple bands. The staring type uses filters such as liquid crystal tunable filters (LCTF) and acousto-optic tunable filters (AOTF) [26] to scan the spectrum: the light passes through a focusing optical system and is filtered to produce a narrow spectral band that is imaged on the detector focal plane. Two-dimensional image information is thus captured one band at a time, and the images from different bands are stacked to form an image cube [27]. A cube constructed by wavelength scanning offers the merit of revealing spectral information in real time, which is important for targeting and focusing [28]. Because of its short acquisition time, it is easy to couple with optical instruments such as cameras, endoscopes, or microscopes, and it is widely used in biomedicine for detecting ex vivo tissues;
3. Snapshot
Snapshot imaging systems record spatial as well as spectral information on the detector in a single exposure; since the snapshot mode does not require scanning in the spatial or spectral dimensions, its spatial and spectral resolutions are limited. Consequently, for a given CCD, spectral sampling can be compensated for by increasing the spatial sampling [29]. The snapshot imaging system differs from the whiskbroom, push broom, and staring modes in that it requires no scanning and can map both remapped and scattered images onto the CCD detector [30]. The obtained data can thus be turned into a spectral data cube by direct and simple processing. The strength of this approach is that it allows for rapid experiments, and it is usually suitable for studies of rapid processes, such as endoscopic inspection.
Table 1. Application areas of hyperspectral imaging systems and medicine.
| Reference | Spectral Range (nm) | Spectral Resolution (nm) | Detector | Spectral Spectroscopy | Acquisition Mode | Applications |
|---|---|---|---|---|---|---|
| [31] | 450~900 | | CRI Maestro imaging system | LCTF | | Tumor margin classification |
| [32] | 430~680 | | Monochromatic CCD camera | | | In vivo tumors |
| [33] | 450~900 | 5 | CRI Maestro imaging system | LCTF | | Head and neck cancer |
| [34] | 500~995 | 5 | TIVITA Tissue Camera | | Push broom | Ex vivo kidney classification |
| [35] | 350~1000 | >1 | Micro-hyperspectral imaging system | PGP | | Stomach cancer classification |
| [17] | | | Silicon charge-coupled devices | LCTFs | | Blood cell classification |
| [36] | 400~720 | | CCD | LCTF | | Blood cell classification |
| [37] | 500~1000 | 5 | TIVITA Tissue Camera | | Push broom | Tissue classification |
| [38] | 400~1000 | 2~3 | VNIR camera, HELICoiD demonstrator, Si CCD | LCTFs | Push broom | Brain cancer detection |
| [39] | 430~920 | | Hyperspectral line-scan camera (IMEC) | | Push broom | Colon cancer classification |
| [40] | 477~891 | | SICSURFIS Spectral Imager | FPI | Hand-held | Skin tumors |
| [41] | 450~950 | 8 | Snapshot HS camera | | Snapshot | Skin cancer |
| [42] | 400~1000 | 2.8 | CCD | | Push broom | Breast cancer cell detection |
| [43] | 450~950 | | CRI Maestro imaging system | LCTF | | Head and neck cancer |
| [44] | 500~1000 | 5 | TIVITA Tissue Camera | | Push broom | Esophageal cancer classification |
| [45] | | | Spatial-scanning hyperspectral endoscope (HySE) | | Push broom | Esophageal cancer |
| [46] | 450~950 | | CCD | FPI | Snapshot | Skin feature detection |
| [47] | 400~1000 | 2.8 | Microscopic HS camera, CCD | PGP | Staring | Brain cancer classification |
| [48,49] | 450~900 | 5 | CRI Maestro imaging system | LCTF | | Head and neck cancer |
| [50] | 500~1000 | 5 | TIVITA Tissue Camera | | Push broom | Surgical instruction |
| [51] | 400~1000 | 2.8 | CCD | | Push broom | Brain tissue |
| [52] | 486~700 | | SnapScan hyperspectral camera | | | Head and neck cancer |
| [53] | 450~900 | | CRI Maestro imaging system, CCD | LCTF | | Head and neck cancer |
| [54] | 400~1000, 900~1700 | | Hyperspectral cameras | | Push broom | Tongue tumor detection |
| [55] | 550~1000 | 7.5 | CCD | AOTF | | Melanoma segmentation |
| [56] | 500~1000 | 5 | TIVITA Tissue Camera | | Push broom | |
| [57] | 500~1000 | 5 | TIVITA Tissue Camera | | Push broom | Tissue segmentation |
| [58] | 450~680 | | CMOS | LCTF | | Stomach cancer classification |
| [59] | 900~1700 | | InGaAs Hyperspec® | | Push broom | Stomach cancer classification |
| [60] | 450~950 | | CRI Maestro imaging system, CCD | LCTF | | Head and neck cancer |
| [61] | 510~900 | 6~10 | Compact imaging system | FPI | Hand-held | Diabetic skin complications |
| [62] | 500~1000 | 5 | HSI laparoscope | Monochromator | Push broom | Excised tissue reflectance measurement |
Note: LCTF, liquid crystal tunable filter; PGP, prism-grating-prism; AOTF, acoustic-optical tunable filter; FPI, Fabry–Pérot interferometer.

2.2.2. Fluorescence Hyperspectral Imaging System

The CRI Maestro hyperspectral imaging system (Caliper Life Sciences, Inc. (Nasdaq: CALP), USA) [53,63,64] allows the acquisition of hyperspectral images of in vitro surgical specimens. Spectral scans are usually performed using a liquid crystal tunable filter (LCTF) and a 300-W Cermax-type xenon light source (Excelitas Technologies Corp., USA) [65]. The system combines multispectral imaging and analysis to acquire, for each pixel, the spectrum from the visible range through the near-infrared. Combined with the spectral information from the object, it implements multispectral analysis, separation, and other techniques, making it an in vivo fluorescence imaging technique with high accuracy and sensitivity.

2.2.3. Handheld Hyperspectral Imaging System

Unlike traditional push broom hyperspectral imagers, handheld hyperspectral imagers use fast spectral capture of a single image and are capable of rapid imaging. The small form factor and simple operation reduce the complexity of handling common imaging.
Raita-Hakola et al. [66] presented the SICSURFIS handheld hyperspectral imaging system [40], as shown in Figure 5. It is a compact, handheld, piezo-actuated metallic-mirror Fabry–Pérot interferometer (FPI) hyperspectral imager, consisting of the prototype FPI imager itself, an RGB sensor, and an LED light source. The light source comprises three purposely selected sets of nine LEDs that can deliver light ranging from white to 940 nm. The imager is almost as fast as a snapshot spectral imager, is adapted to complex skin surfaces, and allows stereoscopic imaging by tilting at given angles; it thus provides spectral images at different angles for photometric stereo calculations, allowing the skin surface to be modeled at each captured wavelength. As shown at the lower left of Figure 5, the mode, spectral separator, and LEDs of the imager are all independently controllable and can be configured flexibly and efficiently in software.
Besides the SICSURFIS handheld hyperspectral imaging system described above, another one is the compact imaging system [61,67]. This imaging system is built on a hyperspectral snapshot camera and uses FPI to provide a spectral resolution of 6–10 nm in a wavelength spectrum of 500–900 nm. As shown in Figure 6, this system can image randomly selected skin areas, where (a) is the detection of the skin of the palm of the hand and (b) is the detection of the dorsum of the foot in diabetic patients.

3. Medical Hyperspectral Image Analysis

Analysis of acquired hyperspectral images, especially medical hyperspectral images, can extract information important for diagnosis and treatment from tissues and cells and is therefore central to medical diagnosis and clinical applications. Because hyperspectral images usually span the visible spectrum and hundreds of spectral bands, they are regarded as a hypercube: this provides rich spectral information for image analysis and the advantage of high spatial and spectral resolution from which more useful information can be obtained. At the same time, the high dimensionality makes the data harder to analyze and can lead to data redundancy and the curse of dimensionality. Band selection can mitigate the curse of dimensionality to a certain extent. Table 2 compares six band selection methods; Table 3 lists the image preprocessing operations used in the literature; and Table 4 lists the deep learning architectures used in the literature.

3.1. Image Pre-Processing with Spectra

3.1.1. Normalization

After hyperspectral data are acquired, factors such as high dimensionality, intra-image band redundancy, and instrument noise mean that the data must be processed by some common preprocessing algorithms, which remove unnecessary noise.
The hyperspectral radiation observations are normalized to eliminate the spectral inhomogeneity and dark current effects of the illumination device. Spectral features based on uniform reflectance will be obtained for feature extraction. First, the raw radiation data are converted to normalized reflectance [68,69,70] using (1):
$$I_{ref} = \frac{I_{raw} - I_{dark}}{I_{white} - I_{dark}} \tag{1}$$
where $I_{ref}$ is the acquired normalized reflectance; $I_{raw}$ is the original HS image; $I_{white}$ is the white reference image; and $I_{dark}$ is the dark reference image acquired using the acquisition system.
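The calibration of Equation (1) is applied element-wise to the whole cube. A minimal NumPy sketch, with toy counts chosen only to make the arithmetic obvious:

```python
import numpy as np

def calibrate_reflectance(raw, white, dark):
    """Per-pixel reflectance calibration, Eq. (1):
    (I_raw - I_dark) / (I_white - I_dark)."""
    raw = raw.astype(np.float64)
    dark = dark.astype(np.float64)
    denom = white.astype(np.float64) - dark
    # Guard against dead pixels where the white and dark references coincide.
    denom = np.where(denom == 0, np.finfo(np.float64).eps, denom)
    return (raw - dark) / denom

# Toy 2x2x3 cube: dark current 100 counts, white reference 1100 counts,
# and a raw signal exactly halfway between them.
dark = np.full((2, 2, 3), 100.0)
white = np.full((2, 2, 3), 1100.0)
raw = np.full((2, 2, 3), 600.0)

refl = calibrate_reflectance(raw, white, dark)
print(refl[0, 0])  # [0.5 0.5 0.5]
```

Since (600 − 100)/(1100 − 100) = 0.5, every calibrated pixel ends up at 50% reflectance, independent of the illumination level and dark current.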
One of the most popular normalization techniques is the Standard Normal Variate (SNV). SNV is usually applied to resolve scattering effects caused by the presence of particles of different sizes on the surface of an object, to reduce the inhomogeneity of the particles, and to resolve the effects caused by NIR diffuse reflectance. SNV operates on one spectrum at a time; the transformed spectrum is given by Equation (2):
$$x_{SNV} = \frac{x - \bar{x}}{\sqrt{\sum_{k=1}^{m}(x_k - \bar{x})^2 / (m - 1)}} \tag{2}$$
where $\bar{x} = \frac{1}{m}\sum_{k=1}^{m} x_k$; $m$ is the number of wavelength points; and $k = 1, 2, \ldots, m$.
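In code, Equation (2) is simply standardization with the sample (m − 1) standard deviation. A minimal NumPy sketch on an illustrative four-point spectrum:

```python
import numpy as np

def snv(spectrum):
    """Standard Normal Variate, Eq. (2): centre one spectrum to zero mean and
    scale by its sample standard deviation (ddof=1, i.e. the m - 1 in Eq. (2))."""
    x = np.asarray(spectrum, dtype=np.float64)
    return (x - x.mean()) / x.std(ddof=1)

spec = np.array([0.2, 0.4, 0.6, 0.8])
z = snv(spec)
# After SNV the spectrum has (numerically) zero mean and unit sample variance.
print(bool(np.isclose(z.mean(), 0.0)))       # True
print(bool(np.isclose(z.std(ddof=1), 1.0)))  # True
```

In a full cube, the same function is applied independently to the spectrum of every pixel, which is what removes per-pixel scattering offsets.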

3.1.2. Smoothing Denoising

Spectral noise is present in the measured spectral features, and the HS sensor has a poor response in some bands, which should be removed. Smoothing filters are then used to filter the HS data [71] to diminish the random spectral noise in the remaining spectral bands. Smoothing filtering is the simplest and most effective method of eliminating noise; algorithms such as moving windows and least squares are usually used. Among them, Savitzky–Golay smoothing [72] largely preserves data characteristics such as relative extrema and peak widths while accomplishing smooth denoising of the original spectrum.
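Savitzky–Golay smoothing fits a low-degree polynomial to each sliding window by least squares and evaluates it at the window centre. In practice one would reach for a library routine, but the weights can be derived directly, as in this self-contained NumPy sketch (window length and polynomial order are illustrative choices):

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Savitzky-Golay smoothing weights: least-squares fit of a degree-
    `polyorder` polynomial to each window, evaluated at the window centre."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    # Vandermonde design matrix of the local polynomial fit.
    A = np.vander(offsets, polyorder + 1, increasing=True)
    # Row 0 of the pseudo-inverse gives the constant term = value at centre.
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=5, polyorder=2):
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    padded = np.pad(y, half, mode="edge")  # simple edge handling
    return np.convolve(padded, c[::-1], mode="valid")

# A quadratic signal is reproduced exactly in the interior, because a
# degree-2 fit matches it; this is the feature-preserving property above.
x = np.arange(20, dtype=float)
clean = 0.5 * x**2
smoothed = savgol_smooth(clean)
print(bool(np.allclose(smoothed[2:-2], clean[2:-2])))  # True
```

Applied to a noisy spectrum, the same weights attenuate random noise while keeping relative extrema largely in place, which is why this filter is favoured over a plain moving average.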

3.1.3. Band Selection

Band selection [73] is an important tool and probably the most effective and direct method of alleviating hyperspectral data redundancy. It aims to pick a small subset of the hyperspectral bands, i.e., to select some information-rich and distinctive features from the original hyperspectral image cube, which reduces the computational cost while maintaining the physical characteristics of the bands [74].
Band selection methods in the current literature broadly divide into six types: ranking-based; search-based; clustering-based; sparsity-based; embedded learning-based; and hybrid scheme-based [74]. Since clustering algorithms consider only the redundant information between spectral bands and ignore the amount of information in the band subset, Wang et al. [75] developed a new method that selects bands through an adaptive subspace partitioning strategy and achieved good results in terms of accuracy as well as efficiency. Sun et al. [76] proposed a fast, latent low-rank subspace clustering method for band selection, obtaining higher classification accuracy at lower computational cost.
Table 2. Comparison of six band selection methods.

| Method | Principle | Advantages | Disadvantages |
|---|---|---|---|
| Ranking-based | Use a suitable function to quantify the amount of information in each band, then select the top subset of bands according to their importance | Low computational complexity and fast execution on larger hyperspectral datasets | Correlation between bands is often not considered |
| Search-based | Treat the criterion function as a multi-objective optimization problem to find the optimal bands | | Considers only individual bands, ignoring the subset as a whole; computationally intensive and difficult to apply in practice |
| Clustering-based | Choose a representative subset of bands from the clusters formed by grouping | The entire subset of bands can be optimized; less affected by noise; simple algorithm | Poor robustness; easily falls into local optima |
| Sparsity-based | Obtain representative bands by solving sparsity-constrained optimization problems | Can reduce the complexity of hyperspectral data processing, reduce storage space, and improve model interpretability | Difficult to automate; uncertain processing performance |
| Embedded learning-based | Optimize the objective function of a specific model and select the appropriate spectral bands | Avoids repeatedly training a learner for each subset of bands | Performance depends on parameter tuning, and the objective function is hard to construct |
| Hybrid scheme-based | A synthesis of several band selection algorithms | Can find the best combination of bands, yielding the fewest useful bands | Algorithmic complexity |

Differences and similarities: search-based, sparsity-based, and embedded learning-based band selection methods are all optimization problems with objective functions, while ranking-based and clustering-based methods are based on the importance of bands. All band selection methods aim to select the combination of bands with high information content, low correlation between bands, and the best class separability.
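The ranking-based strategy in the first row of Table 2 can be sketched in a few lines. Per-band variance is used here purely as an illustrative information measure (entropy or SNR would work the same way); the synthetic cube is constructed so that two bands visibly carry more information than the rest:

```python
import numpy as np

def rank_bands(cube, k=10):
    """Ranking-based band selection: score every band of an (H, W, B) cube
    with an information measure (here, per-band variance) and keep the top k."""
    h, w, bands = cube.shape
    scores = cube.reshape(-1, bands).var(axis=0)
    return np.argsort(scores)[::-1][:k]  # indices of the k highest-scoring bands

rng = np.random.default_rng(1)
cube = rng.random((32, 32, 60)) * 0.01       # 60 nearly flat, uninformative bands
cube[:, :, 7] += rng.random((32, 32))        # band 7 carries real contrast
cube[:, :, 23] += 2 * rng.random((32, 32))   # band 23 carries even more

selected = rank_bands(cube, k=2)
print(sorted(selected.tolist()))  # [7, 23]
```

As Table 2 notes, this criterion scores bands independently, so correlation between the selected bands is not considered; clustering- or search-based schemes address exactly that weakness.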

3.1.4. Feature Dimensionality Reduction

Feature dimensionality reduction, in other words feature extraction, is also an important tool. Feature extraction transforms the raw hyperspectral data through a linear or nonlinear mapping into a lower dimension while retaining the effective information for subsequent analysis. Typical methods include principal component analysis (PCA) [77,78], linear discriminant analysis (LDA) [79], minimum noise fraction (MNF), and independent component analysis (ICA).
Principal component analysis is the most widely used method for reducing the dimensionality of hyperspectral features, improving interpretability without losing much information. It is a statistical technique that retains the maximum amount of information while eliminating redundant noise and data. The MNF transform is essentially two cascaded PCA transforms designed to reduce the spectral dimensionality and separate noise from the image data. ICA seeks spectral features that are as independent as possible and is a useful extension of principal component analysis: its central idea is to assume that the data are linear mixtures of a set of individual sources and to decompose them according to the statistical independence of those sources.
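PCA on a hypercube amounts to an SVD of the mean-centred pixel-by-band matrix. The following self-contained NumPy sketch (cube size and component count are illustrative) reduces the spectral dimension and reports the explained-variance ratios:

```python
import numpy as np

def pca_reduce(cube, n_components=3):
    """Reduce the spectral dimension of an (H, W, B) hypercube with PCA,
    implemented directly via SVD of the mean-centred pixel-by-band matrix."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                      # centre each band
    # Rows of Vt are the principal spectral components (ordered by variance).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T         # project pixels onto components
    explained = S**2 / np.sum(S**2)
    return scores.reshape(h, w, n_components), explained[:n_components]

rng = np.random.default_rng(2)
cube = rng.random((16, 16, 50))              # 50 spectral bands
reduced, ratio = pca_reduce(cube, n_components=3)
print(reduced.shape)  # (16, 16, 3)
print(bool(ratio[0] >= ratio[1] >= ratio[2]))  # True: variance is ordered
```

The 50-band cube collapses to three component images per pixel, which is the usual input to the classifiers in Section 3.2 when PCA appears in Table 3.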
Table 3. Methods of image pre-processing.

| Reference | Normalization | Smoothing Denoising | Band Selection | Feature Dimensionality Reduction | Calibration | Remarks |
|---|---|---|---|---|---|---|
| [43] | Normalized reflectance spectra | | | | | |
| [31] | Normalized reflectance spectra | | | | | |
| [33] | Normalized reflectance spectra | | | | | |
| [63] | Normalized reflectance spectra | | | | | Glare removal |
| [34] | Normalized reflectance spectra | Savitzky–Golay smoothing | | | | Manual background segmentation, automatic region of interest (ROI) selection |
| [80] | | | | PCA | | |
| [35] | | Savitzky–Golay smoothing | | PCA | | First-order derivation for spectral dimension preprocessing |
| [17] | | | | PCA | | |
| [36] | | | | PCA | | |
| [37] | SNV | | | | | |
| [81] | SNV | | | | | |
| [82] | Normalized reflectance spectra | | | PCA | | |
| [38] | Normalized reflectance spectra | | | Fixed-reference t-distributed stochastic neighbors embedding | | HySIME noise filtering, extreme noise band removal, and spectral averaging |
| [83] | Normalized reflectance spectra | | | PCA | | |
| [39] | Normalized reflectance spectra | | | PCA | | |
| [84] | | | Shannon entropy | | | |
| [40] | | | | | | Machine learning pre-processing |
| [85] | Normalized reflectance spectra | | | PCA | | Singular spectrum analysis (SSA) |
| [41] | Normalized reflectance spectra | Smoothing filter noise processing | | | | |
| [86] | | | | PCA | | |
| [42] | Normalized reflectance spectra | | | | | |
| [87] | Normalized reflectance spectra | Smoothing filter noise processing | | | | |
| [88] | | | | ICA | | K-means |
| [44] | Normalized reflectance spectra | | | | | |
| [45] | Normalized reflectance spectra | | | | | |
| [89] | Normalized reflectance spectra | | | | | |
| [90] | Normalized reflectance spectra | | ACO band selection | | | Ant colony optimization (ACO) |
| [91] | | | | PCA | | |
| [46] | Normalized reflectance spectra | | | PCA | | |
| [47] | | | | | Ratio between original image and reference image | |
| [48] | Normalized reflectance spectra | | | | | |
| [51] | Normalized reflectance spectra | | | PCA | | |
| [52] | | | | PCA | | |
| [49] | Normalized reflectance spectra | Smoothing filter noise processing | | | | |
| [53] | Normalized reflectance spectra | | | | | |
| [54] | | | | PCA | | |
| [55] | | | | PCA | | |
| [56] | | Median filter | | | | |
| [57] | Normalized reflectance spectra | Savitzky–Golay smoothing, Gaussian-filtered spatial smoothing | | PCA | | Outlier removal, background recognition |
| [92] | Standard normalization transformation | Gaussian-filtered spatial smoothing | | | | |
| [59] | Normalized reflectance spectra | | | | | |
| [60] | Normalized reflectance spectra | 3rd-order median filter, curvature correction | GFP bands removal | | | Background removal |
| [61] | Normalized reflectance spectra | | | | | |
| [62] | Normalized reflectance spectra | | | | | |

3.2. Classification

The classification of medical hyperspectral images (MHSI) was among the first applications of medical hyperspectral analysis, and MHSI is now becoming increasingly popular for medical diagnostic applications. High-resolution hyperspectral images provide richer spectral features for classification tasks, and this technique is mostly used for cancer detection and classification as well as for cell classification [93]. Traditional machine learning methods were previously often used for the classification of medical hyperspectral images; ML uses data and statistical models for learning and recognition tasks and can make decisions with or without supervision. For example, Torti et al. [82] first used a supervised classification algorithm consisting of PCA, SVM, and KNN and then combined it with K-means clustering for a final weighted classification to correctly separate normal and cancerous tissues. Fabelo et al. [38] used a combination of supervised and unsupervised methods, applying SVM for supervised pixel classification, then a t-stochastic neighbors embedding dimensionality-reduction algorithm, and finally combining the segmentation maps generated by unsupervised clustering to accurately identify tumor margins.
DL is a deep neural network-based approach. Compared with ML, DL does not require hand-crafted features; it obtains strong learning ability by increasing the number of network layers, computing the weight parameters automatically, and continuously learning the features of various data. As an end-to-end network model, it has advantages in image processing and is widely used in the classification of hyperspectral images. In traditional machine learning algorithms, the features used for classification are represented by a one-dimensional vector. In contrast, HSI is multidimensional data consisting of two-dimensional spatial information and one-dimensional spectral information, and images at different wavelengths carry different amounts of information. Each pixel is composed of hundreds of spectral bands and therefore contains rich spectral features. If the multidimensional data are simply flattened during processing, the extracted features contain only spectral information, and spatial information is ignored. Therefore, more and more scholars focus on dual-branch structures that extract spatial and spectral features simultaneously. Using deep learning methods, one can not only extract features in the spatial dimension by convolution but also slide the convolution kernel along the spectral dimension to extract high-level spectral features. Extracting spatial information and spectral features from hyperspectral images simultaneously makes the extracted information richer and operations such as classification more accurate.
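As a minimal, hypothetical sketch of how such a dual-branch model is fed (the cube dimensions, patch size, and function name are illustrative and not taken from any cited model), the two inputs for a single pixel can be prepared as follows:

```python
import numpy as np

def dual_branch_inputs(cube, row, col, patch=5):
    """Prepare the two inputs of a dual-branch spatial-spectral model:
    a local spatial patch and the pixel's full spectral vector."""
    half = patch // 2
    # Reflect-pad spatially so border pixels still yield a full patch.
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    spatial = padded[row:row + patch, col:col + patch, :]  # (patch, patch, B)
    spectral = cube[row, col, :]                           # (B,)
    return spatial, spectral

# Toy 8 x 8 hypercube with 20 spectral bands.
cube = np.random.rand(8, 8, 20)
sp, spec = dual_branch_inputs(cube, 0, 0)
print(sp.shape, spec.shape)  # (5, 5, 20) (20,)
```

The spatial patch would feed the convolutional (spatial) branch, while the per-pixel spectrum feeds the spectral branch.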
In contrast to other fields, deep learning for MHSI took some time to mature. Early MHSI work used artificial neural networks to classify cancer: Nathan et al. [83] combined hyperspectral imaging with machine learning, using support vector machines (SVMs) and artificial neural networks (ANNs) to distinguish between different types of cancer.
Recently, convolutional neural networks (CNNs) have been widely used for the classification of MHSI. In 2018, Huang et al. [36] proposed extracting deep features of MHSI using CNNs combined with Gabor filters, called the GFCNN model, to enhance the classification of hemocytes under small samples. In subsequent years, Huang et al. [80] further applied MGCNN, a deep convolutional network with a modulated Gabor filter classification framework, to classify blood cells. Wei et al. [86] constructed an EtoE-Net model consisting of a two-channel CNN with pixel-by-pixel mapping between the original MHSI and the main-band images to form globally fused features. Global and local features were extracted in the dual-channel CNN, and the multiple features were expanded and concatenated into a stacked vector for fusion; this model achieved the highest classification performance when compared with traditional machine learning methods. Wang et al. [84] presented an ultra-deep 3D convolutional network that combined a 3D CNN with a 3D attention module for leukocyte classification; the experimental results showed the highest accuracy when the attention module was placed at the final layer of the network. Besides the classification of cells, deep learning is applied in many medical image classification tasks for cancer diagnosis, and most studies have used hyperspectral imaging with convolutional neural network (CNN) classifiers for cancer cell classification [31,33,35,42,85,91,94]. For example, Sommer et al. [34] classified nephrons using CNNs based on HSI data, specifically residual neural networks (ResNet). Li et al. [58] used a deep learning architecture with ResNet34 on fluorescent hyperspectral images for the classification of gastric cancers, and the model achieved classification accuracy, specificity, and sensitivity of more than 96%. Bengs et al. [32] investigated the more challenging problem of in vivo tumor classification using HSI and various deep learning approaches. An efficient convolutional gated recurrent unit (CGRU) was used to reduce the dimensionality of the three-dimensional hyperspectral cube, and a CNN following a densely connected convolutional neural network (DenseNet) was then used to process the two-dimensional data for final classification. Grigoroiu et al. [45] implemented online CNN classification of data from HSI endoscopy to stain and analyze different disease stages of the pig esophagus and the human esophagus. Spatially distinct colors were displayed, and the properties of the deep learning algorithms were validated using color-based classification methods, showing that pixel-level classification is possible for hyperspectral endoscopic data with 18 pure color spectra and reflecting the great potential of CNNs for real-time color classification in endoscopic HSI.
The U-Net network is the most used and effective network in the medical area. It was initially used only for image segmentation but was gradually extended to classification and detection. Since the standard network is not directly applicable to the analysis of hyperspectral images, Manifold et al. [95] proposed the U-within-U-Net (UwU-Net) framework, which can classify, segment, and predict orthogonal imaging modalities across various hyperspectral imaging techniques. Multiple drug locations in rat liver tissue imaging were predicted by an external U-Net processing the spectral information and internal U-Nets processing the spatial information.

3.3. Detection

Medical hyperspectral images are less commonly used for detection, which leaves more room for development. In clinical medicine, detection in pathological images can become a key part of future diagnosis. One paper combined wavelet-transform features with machine learning, using a discrete wavelet transform (DWT)-based feature classification method [60]. The average spectra of equally sized pixel blocks were extracted from cancerous and normal tissues as the original spectra, and a support vector machine (SVM) was used to classify both the original data and the extracted wavelet features. A tumor mask was generated for the images to distinguish cancerous from normal tissue, and experiments showed that classification based on wavelet features discriminated overlapping spectra better.
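The DWT splits a spectrum into a smoothed approximation and a detail signal that captures local variation, and these coefficients can then serve as classifier features. As a minimal sketch (the specific wavelet used in [60] is not stated here; the Haar wavelet and the toy spectrum below are chosen purely for illustration):

```python
import numpy as np

def haar_dwt(spectrum):
    """One-level Haar DWT: split a 1-D spectrum into approximation
    (low-pass, smoothed shape) and detail (high-pass, local variation)."""
    s = np.asarray(spectrum, dtype=float)
    if len(s) % 2:                       # pad odd-length spectra
        s = np.append(s, s[-1])
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-pass coefficients
    detail = (even - odd) / np.sqrt(2)   # high-pass coefficients
    return approx, detail

avg_spectrum = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = haar_dwt(avg_spectrum)
```

The concatenated approximation and detail coefficients (energy-preserving, since the Haar transform is orthonormal) would be the "wavelet features" fed to the SVM.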
MHSI usually applies CNN-based pixel-level classification of medical hyperspectral images to tumor detection. Detection of head and neck cancers is most common: a two-stream convolutional model [96], with spectral and structural branches, was used on scanned hyperspectral data of tongue squamous cell carcinoma to divide it into three regions (tumor, healthy muscle, and epithelium), and the two-stream model outperformed the purely spectral and purely structural methods. Bengs et al. [97] proposed a technique to detect in vivo pharyngeal cancer by feeding spectrally stacked data into a DenseNet2D-MS network, in which DenseNet blocks were connected with 3D convolutional blocks to extract spatial-spectral information; tumors and healthy tissues were then distinguished using global average pooling (GAP) and a classification layer. Halicek et al. [48] used an Inception-v4 CNN architecture and introduced a gradient-based class activation mapping algorithm to investigate the detection capability of hyperspectral images for cancer detection, showing that HSI could help surgeons and pathologists detect tumors in glands. In another paper, a single-stream U-Net architecture fed with stacked visible (VIS) and near-infrared (NIR) light was applied [54] to achieve real-time segmentation of hyperspectral imaging in surgery. For the first time, deep learning semantic segmentation of HSI data was used for tumor detection, and experiments demonstrated the importance of NIR spectroscopy for tumor capture.
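Global average pooling, applied before the classification layer as in [97], simply averages each feature map to a single scalar, so a stack of C maps becomes a C-dimensional descriptor. A minimal numpy illustration (the toy feature maps are arbitrary values):

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to one scalar by averaging,
    yielding a (C,) vector for the classification layer."""
    return feature_maps.mean(axis=(1, 2))  # (C, H, W) -> (C,)

fmaps = np.arange(24, dtype=float).reshape(2, 3, 4)  # 2 channels of 3 x 4
pooled = global_average_pool(fmaps)
print(pooled)  # [ 5.5 17.5]
```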
Beyond this, detection has also been applied to other diseases. Examples include breast cancer cell detection through the development of a hyperspectral imaging microscope and deep learning software for digital pathology applications [42]. A manual local-feature detection method and a deep learning-based feature detection method were adopted for detecting features below the skin surface, demonstrating the system's ability to track skin features and showing that the deep-learned skin features were detected and localized better than the manual local features [46].

3.4. Segmentation

In medical image segmentation, outlines are mostly sketched in the image so that the contours of organs or other important parts can be clearly seen as a reference. This operation is important for distinguishing organs in the human body, such as the brain, for medical diagnosis. In recent years, there has been less segmentation work on medical hyperspectral images, but the methods in the published articles are relatively new.
One investigation applied a hybrid machine learning and HSI approach [57] to tissue segmentation for image-guided surgery of the liver and thyroid, evaluating seven machine learning models. For each model except U-Net, spatial analysis was performed at three levels: no spatial analysis, single-scale analysis, and multi-scale analysis. The experimental results for the liver showed that U-Net could identify tissues with high accuracy and achieved optimal segmentation performance, while an SVM with an RBF kernel combined with multi-scale spatial analysis obtained suboptimal performance. For tissue recognition on thyroid HSI data, LR combined with multi-scale spatial analysis segmented with the highest efficiency. Garifullin et al. used dense fully convolutional networks (Dense-FCNs) combined with the SegNet model [98] to jointly segment retinal vessels, optic discs, and the macula in hyperspectral retinal images and also experimented on RGB images. The comparison showed that the spectra can provide additional information about the optic disc and macula and improve recognition performance.
The U-Net architecture is mostly used in medical segmentation; its main novelty is the combination of matched up-sampling and down-sampling layers, on which most segmentation networks are nowadays improved. Trajanovski et al. [99] segmented squamous cell carcinoma tumors with a U-Net network by randomly selecting 100 patches of 256 × 256 pixels from each patient's dataset as input. Because larger patches were selected, the spatial context occupies a larger area and provides better performance than pixel-level spectral and structural methods, while also demonstrating the importance of infrared spectroscopy for the analysis. Later, a single-stream U-Net fed with stacked visible and infrared light was published, again confirming the importance of infrared spectroscopy [54]. To make full use of the spectral features in 3D hyperspectral data, Wang et al. [55] proposed Hyper-Net, a 3D fully convolutional encoder-decoder network for the segmentation of hyperspectral pathology images of melanoma. To preserve fine features lost with depth, a dual path with an added dilated convolution block for fast extraction of low-resolution fine-grained features was used in the final encoding part, which significantly improved segmentation accuracy. Seidlitz et al. [56] overlaid organ-correlated images of visceral tissue oxygen saturation (StO2), near-infrared perfusion index (NPI), tissue water index (TWI), and tissue hemoglobin index (THI) on the input hyperspectral cube. Neural networks were trained at each of the studied data granularity levels (pixel-based, superpixel-based, patch-based, and whole-image-based). The study demonstrated that unprocessed HSI data have great advantages in organ segmentation.
In a subsequent study, a transformer was embedded into the encoding part of U-Net [100] and applied to image segmentation, allowing the dense correlations between bands to be learned. Having the benefits of both transformers and U-Net, it is more capable of segmenting medical images. However, the acquired information is susceptible to the influence of uncorrelated bands, so a sparsity scheme was introduced to form the spectral transformer SpecTr, which was experimentally shown to be superior to 3D-UNet and 2D-UNet.
Table 4. Summary of common deep learning architectures and methods.
References | Detailed method | Application
(Architectures covered: ML, CNN, 3D CNN, 2D CNN, DenseNet, ResNet, U-Net, AlexNet, FCN; tasks: classification, detection, segmentation.)
[31] | 2D CNN + 3D CNN + Inception CNN | Head and neck cancer
[101] | CNN extracts topological embeddings, used for binary classification | -
[32] | DenseNet classification after dimensionality reduction using convolutional gated recurrent units | In vivo tumors
[33] | 3D CNN and 2D Inception CNN | Head and neck cancer
[63] | CNN classifier | Head and neck cancer
[34] | ResNet consisting of ResNet-18 | Kidney (ambient infusion)
[80] | Combining modulated Gabor filters and CNN in the MGCNN framework | Red blood cells
[35] | Spectral-spatial CNN with 3D convolution | Stomach cancer
[17] | CNN training with different patch sizes after PCA dimensionality reduction | Red blood cells
[36] | Gabor filter and CNN | Red blood cells
[37] | CNN | Tissue classification
[81] | Classification performance comparison using RBF-SVM, MLP, and 3D CNN | Stomach and colon cancer
[82] | PCA, SVM, and KNN classification combined with K-means for final weighted voting | Brain tumor
[83] | SVM combined with ANN for classification | Identification of cancer cells
[39] | HybridSpectralNet (HybridSN) composed of 3D CNN and 2D CNN in spectral space | Colon cancer
[84] | 3D CNN combined with a 3D attention module in a deep hypernetwork | White blood cells
[40] | SICSURFIS HSI-CNN system composed of SICSURFIS imager and CNN | Skin disease
[85] | Stacked autoencoder (SAE) | Tongue coating
[93] | - | White blood cells
[41] | K-means and SAM | Skin disease
[86] | Two-channel deep fusion network EtoE-Fusion CNN for feature extraction | White and red blood cells
[42] | Mapping RGB to a high broad-spectrum domain with 2D CNN classification | Breast cancer
[95] | UwU-Net: external U-Net handles spectral information, internal U-Net handles spatial information | Drug position
[18] | Regression-based partitioned deep convolutional network | Head and neck cancer
[94] | Comparison of 1D, 2D, 3D CNN, RNN, MLP, SVM | Blood classification
[87] | U-Net, 2D CNN, 1D DNN combined for classification | Brain cancer
[43] | Extracting image elements into patches fed into a CNN | Head and neck cancer
[44] | Comparison of RF, SVM, MLP, and K-nearest neighbor | Esophageal cancer
[45] | Pixel-level classification | Head and neck cancer
[89] | AlexNet combined with SVM | Corneal epithelial tissue
[90] | Hybrid 3D-2D network extracting spatial and spectral features | Brain cancer
[91] | CNN with support vector machine (SVM) and random forest (RF) synthetic classification | Tissue classification
[102] | LDA | Septicemia
[48] | Inception-v4 CNN architecture | Head and neck cancer
[103] | Inception-v4 CNN architecture | Head and neck cancer
[51] | 2D CNN classification | Brain cancer
[52] | Comparative classification with RF, logistic regression, and SVM | Head and neck cancer
[58] | ResNet34 | Stomach cancer
[92] | RF, SVM, MLP | Colon cancer
[59] | PCA downscaling, spectral angle mapper (SAM) | Stomach cancer
[60] | Discrete wavelet transform (DWT)-based feature extraction, SVM | Head and neck cancer
[96] | Dual-stream convolution model | Tongue tumor
[97] | DenseNet blocks combined with 3D CNN to extract spatial-spectral information | Head and neck cancer
[46] | CNN with deep local features (DELF) | Skin features
[49] | CNN and SVM + PCA + KNN, respectively | Head and neck cancer
[99] | Channel selection and U-Net | Head and neck cancer
[55] | 3D fully convolutional network with a dilated-convolution and fine-grained-feature dual path | Melanoma
[100] | U-Net encoder using a transformer for spectral information and convolution for spatial information jointly | Carcinoma of bile duct
[56] | Pixel-based, superpixel-based, patch-based, and full-image-based data fed into CNN and U-Net, respectively | Organ segmentation
[57] | Seven machine learning models and U-Net compared | Image-guided surgery
[98] | SegNet and dense fully convolutional neural networks | Eye diseases

3.5. Conclusions

Through reading and organizing the literature, we conclude that several machine learning and deep learning models recur in the medical hyperspectral field and show good results. In classification and detection models, most works improve on common CNN and ResNet architectures, especially 2D CNN, 3D CNN, and 2D CNN combined with 3D CNN, to extract spatial and spectral features from hyperspectral images. In image segmentation tasks, variants of the classical fully convolutional U-Net are mostly used to obtain more efficient models.
In the literature, the network models use the Inception multiscale processing module, the 3D attention module, the transformer, the Gabor filter, the discrete wavelet transform (DWT), and the dilated convolution block (DCC) to enhance the feature extraction. These are also some good research directions that can be of great help in subsequently improving the model’s performance.
When calculating the error between the predicted and true values of a model, the loss functions of cross-entropy loss, SoftMax loss, R-square, root mean squared error (RMSE), and mean squared error (MSE) are usually used to measure the degree to which the model fits the data.
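As a small illustration of these loss functions (straightforward numpy implementations of the standard formulas; the probability and label arrays are toy values):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true class."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))

def mse(pred, true):
    """Mean squared error."""
    return np.mean((pred - true) ** 2)

def rmse(pred, true):
    """Root mean squared error."""
    return np.sqrt(mse(pred, true))

probs = np.array([[0.9, 0.1], [0.2, 0.8]])  # predicted class probabilities
labels = np.array([0, 1])                   # true classes
ce = cross_entropy(probs, labels)
```

Cross-entropy is the usual choice for classification models, while MSE and RMSE suit regression-style outputs.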

4. Medical Hyperspectral Image Application Area

4.1. Medical Diagnosis

As the resolution of hyperspectral medical images has increased, most mainstream research methods now combine the spectral and spatial features of hyperspectral images, which not only extracts the rich spectral information but also integrates texture structure and detailed image information, greatly improving classification accuracy. Optical imaging for cancer detection is feasible because lesions lead to changes in cell morphology and thus to changes in absorption, scattering, and fluorescence properties, so optical tissue characterization can in turn supply valuable diagnostic information. HSI can obtain wide-area images of tissue, improving diagnostic accuracy for conditions such as stomach, breast, cervical, skin, and head and neck diseases. Table 5 presents different methods for medical hyperspectral image application areas and compares their achievements.

4.1.1. Stomach Cancer

Most studies in the last decade or so can play a key part in early cancer detection; tumor detection can help doctors diagnose cancer and resect malignant tumor areas with a safe margin.
Liu et al. [59] used a NIR-HSI system to capture hyperspectral images of gastric tissue and extracted the average spectrum and standard deviation of normal and cancerous image pixels. The dimensionality of the hypercube was reduced using principal component analysis (PCA), and six optimal wavelengths were selected. Normal and cancerous tissue were then classified using a spectral angle mapper (SAM), which ultimately achieved a classification accuracy of 90%.
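SAM assigns each pixel to the reference class whose spectrum forms the smallest angle with the pixel spectrum. A minimal sketch of the technique (the four-band reference spectra below are toy values, not data from [59]):

```python
import numpy as np

def spectral_angle(x, ref):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, references):
    """Assign the pixel to the class with the smallest spectral angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

normal = np.array([0.2, 0.4, 0.6, 0.8])   # toy 'normal tissue' reference
cancer = np.array([0.8, 0.6, 0.4, 0.2])   # toy 'cancerous tissue' reference
pixel = np.array([0.25, 0.45, 0.55, 0.75])
print(sam_classify(pixel, [normal, cancer]))  # 0 -> 'normal'
```

Because the angle depends only on spectral shape, not magnitude, SAM is insensitive to overall illumination differences.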
Collins et al. [81] performed detection experiments using the support vector machine with radial basis function kernel (RBF), MLP, and 3DCNN approaches on data containing 12 colon cancer patients as well as 10 esophageal cancer patients, respectively. The final experimental results show that 3DCNN performs better on both datasets. It is also proposed that the use of interactive decision thresholding can be applied in future surgical procedures and be used with high value to improve the classification performance.
Hu et al. [35] built a classification model with an efficient joint CNN to extract the deep spectral-spatial features of tumors that facilitate classification. Based on the differences between the microscopic hyperspectral features of gastric cancer tissue and normal tissue, experiments were conducted on a hyperspectral dataset of 30 stomach cancer patients. The model was shown to exceed 97% in accuracy, sensitivity, and specificity when classifying cancerous and normal tissue.
Li et al. [58] used a fluorescence hyperspectral imaging technique that can obtain spatial as well as spectral information about tissues. They also used a deep learning architecture combined with a spatial–spectral classification method to classify the obtained fluorescence hyperspectral images into non-cancerous lesions, precancerous lesions, and gastric cancer groups, and the accuracy, specificity, and sensitivity of the classification were all above 96%.

4.1.2. Brain Cancer

The most important aspect of brain cancer surgery is the accurate excision of the tumor, preserving the maximum amount of healthy tissue to ensure the postoperative safety of the patient. Fabelo et al. [87] employed a deep learning approach to process hyperspectral images of living brain tissue to determine where the tumor is located, which can guide the surgeon during an operation. Furthermore, the proposed visualization system can be adjusted at any time to find the classification threshold best suited for surgery. Manni et al. [90] investigated techniques to identify tissue types during surgery and proposed a hybrid 3D-2D CNN architecture that classifies normal brain tissue and glioblastoma tissue in an in vivo HS image dataset based on deep-learned spatial and spectral features. Experiments showed that the 2D-3D hybrid network has greater precision in detecting tumor, vascular, and healthy tissue.
Ortega et al. [51] processed sections of human brain tissue by hematoxylin and eosin (H&E) staining and automatically distinguished glioblastoma (GB) from non-tumor tissue on the sections using HSI and a convolutional neural network (2D-CNN). Experiments on 13 patients showed that the mean sensitivity and specificity of automatic detection of pathological sections using convolutional neural networks on HSI images were higher than on RGB images, indicating the potential of HSI for histopathological analysis.

4.1.3. Head and Neck Cancer

Early detection of tumors of the head and neck is critical to patient survival. Endoscopy is usually used to diagnose disease in the larynx; however, because the spectral characteristics differ before and after cancerous lesions develop, non-invasive detection can be performed with a hyperspectral imager.
Maktabi et al. [44] assessed four supervised classification algorithms in their experiments: random forest, SVM, MLP, and K-nearest neighbor. HSI recordings from esophagogastrectomy procedures in 11 patients were used to distinguish malignant tumors from healthy tissue. The ultimate goal is real-time tissue recognition in esophagectomy and gastric pull-up procedures.
Zhou et al. [52] developed a novel polarization hyperspectral imaging technique. Normal and cancerous regions were distinguished on hematoxylin and eosin (H&E)-stained head and neck cancer tissue sections, and a machine learning framework was used for image classification. The results reveal that the SVM classifier showed the greatest classification accuracy for both the raw polarized hyperspectral data and the synthetic RGB image data.
Jeyaraj et al. [18] employed a regression-based partitioned deep learning network for the diagnosis of oral cancer. Two partitioned layers were used to label and classify regions of interest in hyperspectral images, and the final classification results were of higher quality than conventional diagnostics.
Halicek et al. [63] used a CNN classifier for the classification of HSI of resected squamous cell carcinoma, goiter, and healthy tissue samples of the head and neck, validated by hand annotation by a pathologist specializing in head and neck cancer. Initial results on 50 patients show the promise of HSI with DL for automated histological labeling of surgical margins in head and neck patients. Halicek et al. [103] used a DL approach to categorize an entire tissue specimen rather than ROIs, using a convolutional neural network (CNN) to rapidly classify tissue at the carcinoma margins and normal tissue; the potential of HSI-based label-free imaging for squamous cell carcinoma (SCC) detection in surgery was also investigated. Both CNN and SVM + PCA + KNN were used to generate SCC prediction probability maps [49] to investigate the information that hyperspectral imaging provides to ML and CNN methods in head and neck cancer detection and to explore the limitations of HSI-based SCC detection.

4.1.4. Skin Cancer

Leon et al. [41] combined supervised and unsupervised methods to automatically segment the HS map into normal tissue and pigmented skin lesions (PSL) with a K-means algorithm, and subsequently fed the segmented PSL pixels into a classification framework to label them as benign or malignant tumors. This preliminary study illustrates the potential of the HSI technique to help dermatologists distinguish benign from malignant PSL in routine clinical practice using real-time, non-invasive handheld devices.
Lindholm et al. [40] utilized a novel handheld SICSURFIS spectral imager to provide detailed spectral-spatial data. A novel SICSURFIS HSI-CNN system was proposed to effectively distinguish between abnormal and benign skin pathologies (melanoma, pigmented nevi, dermal nevi, basal cell carcinomas, and squamous cell carcinomas), with good results even for complex skin surfaces.

4.1.5. Eye Diseases

Hadoux et al. [104] identified a noninvasive method for retinal imaging. Due to the significant innate variability in ocular reflectance between and within individuals, and between retinal locations, raw retinal reflectance spectra are of limited use for differentiating cases from controls. Therefore, the major axes of within-group spectral variance were removed, and the greatest difference between the reflectance spectra of cases and controls was observed at shorter wavelengths. This method plays an important role in screening for Alzheimer's disease.

4.1.6. Colon Cancer

Colon cancer is the second most prevalent cancer globally, in addition to being the second cause of cancer-related mortality. Some localized, primary as well as early-stage colon cancers are mainly treated by complete removal of the tumor.
Jansen-Winkein et al. [92] used various machine learning methods in parallel with statistical analysis to assess the potency of HSI to distinguish the mucosa of a healthy colon adenoma from colorectal cancer. The experiments used the hyperbolic tangent function as the activation layer in a neural network to test the supervised classification framework RF/SVM with multilayer perception (MLP). Spatially informative classification was achieved on HSI data using a Gaussian filter with 96% accuracy in classifying mucosal cancer tissue.
Manni et al. [39] used the previously proposed HybridSpectralNet (HybridSN) structure, combining a 3D CNN in spectral space with a 2D CNN, for classification on six isolated specimens for detection. It achieved a slightly higher average AUC than the ResNet-based CNN and 3D CNN. It was also shown that the HybridSN-CNN classification method can serve as an innovative technique for detecting colon cancer tissue and for image-guided colon cancer surgery.
Table 5. Comparison of different methods in the field of medical hyperspectral image applications and different achievements.
References | Application | Machine learning method | Deep learning method | Accuracy | Sensitivity
[59] | Stomach cancer | SAM | - | 90% | -
[81] | Stomach cancer | - | 3D CNN | 93% | -
[35] | Stomach cancer | - | CNN | 97.57% | 97.19%
[58] | Stomach cancer | - | ResNet | 96.5% | 96.6%
[87] | Brain cancer | - | U-Net, 2D CNN, 1D DNN | 94% | -
[90] | Brain cancer | - | 3D + 2D CNN | 80% | -
[51] | Brain cancer | - | 2D CNN | 88% | 77%
[44] | Head and neck cancer | Random forest, SVM, MLP, K-nearest neighbor | - | 63% (SVM) | 69% (SVM)
[52] | Head and neck cancer | SVM | - | 93.5% | -
[18] | Head and neck cancer | - | Regression-based deep CNN | 94.5% | 94%
[63] | Head and neck cancer | - | CNN | 96.4% | 96.8%
[41] | Skin cancer | K-means, SAM | - | 87.5% | -
[40] | Skin cancer | - | CNN | 93% | -
[104] | Eye diseases | - | - | - | -
[92] | Colon cancer | MLP | - | 86% | -
[39] | Colon cancer | - | 3D + 2D CNN | 88% | -

4.2. Conclusions

HSI is still a developing medical imaging modality that can provide spatial and spectral information about some tissue samples. It reflects the quality features such as the size and shape of these samples, as well as their internal texture structure and composition differences, and these rich features provide room for the development of deep learning in medical hyperspectral imaging. Its non-invasive nature also plays a huge role in surgical guidance.
However, because deep learning for HSI image processing is still at the stage of theoretical development and technical exploration, its application in hyperspectral medical diagnosis is limited by the bottlenecks of HSI image processing. How to extract richer information at high spectral and spatial resolution without losing detailed information remains a challenge in spectral image processing. It is also important to acquire target information quickly and produce diagnostic results, since the pipeline from hyperspectral image preprocessing through the deep learning architecture to the final results takes considerable time. As HSI continues to evolve, more experimental studies will refine the algorithms and ensure the reliability of HSI analysis for routine clinical use.

5. Discussion

5.1. Hyperspectral Medical Image Processing vs. Hyperspectral Medical Image Diagnosis

This article discusses some commonly used hyperspectral imaging systems, introduces the four main spectral imaging methods (whiskbroom, push broom, staring, and snapshot) as well as the newer handheld hyperspectral imaging systems, summarizes common image preprocessing methods, and discusses the use of deep learning to classify, detect, and segment hyperspectral images. Finally, a brief summary of hyperspectral applications in the medical field is given.
Most researchers seek to achieve the best performance of deep learning methods and neural network architectures in a given domain. However, looking at the majority of medical image competitions, it is apparent that relying only on accurate model structures to obtain good analysis results is one-sided. In addition, different data pre-processing methods and data enhancement techniques are also necessary to obtain good scores. Therefore, the pre-processing of hyperspectral images is the most significant step in conducting the research and analysis of hyperspectral images. Since hyperspectral images are acquired in a high number of bands, the images contain a lot of useful information but also cause the images to contain superfluous information such as background and electrical noise, which makes the analysis of the images difficult. Therefore, most studies perform image preprocessing and spectral preprocessing before using the information from the images.
Although some common methods of image and spectral preprocessing are discussed in this paper, these methods have limitations in practice, and the most suitable method should be identified through continued experimentation. During data collection, preprocessing methods based on the Fourier transform and the wavelet transform were found; these convert between the frequency domain and the time domain and show good analytical performance. These new data pretreatment methods provide a useful direction for future research in data analysis and a good basis for research development in other fields as well.
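The frequency-domain idea can be sketched with a simple FFT low-pass filter applied to a single spectrum; the synthetic peak and the cutoff `keep` are illustrative assumptions, not parameters from the reviewed studies:

```python
import numpy as np

def fft_lowpass(spectrum, keep=10):
    """Suppress high-frequency noise by zeroing all but the lowest
    `keep` Fourier coefficients (a minimal frequency-domain filter)."""
    coeffs = np.fft.rfft(spectrum)
    coeffs[keep:] = 0.0
    return np.fft.irfft(coeffs, n=len(spectrum))

rng = np.random.default_rng(0)
bands = np.linspace(0, 1, 256)
clean = np.exp(-((bands - 0.5) ** 2) / 0.02)   # smooth absorption-like peak
noisy = clean + rng.normal(0, 0.05, size=bands.size)
denoised = fft_lowpass(noisy, keep=12)
# The filtered spectrum is closer to the clean one than the noisy input.
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```

Wavelet-based preprocessing works analogously but thresholds coefficients at multiple scales, which preserves sharp spectral features better than a single global cutoff.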

5.2. Challenges and Opportunities

Most current research shows that spectral images can better extract diagnostically relevant physiological, morphological, and compositional information from tissue. Although there is potential for the early diagnosis of disease, many limitations still hinder the advancement of deep learning in the medical domain.
Firstly, deep-learning research on medical hyperspectral images has been conducted by relatively few teams and within a narrow scope. Most studies use CNNs to classify cells or tissue samples and determine whether cancer is present. Although advanced algorithms can distinguish tissue categories more accurately, application-oriented research in this area is still developing slowly.
Secondly, data for the deep-learning analysis of medical images are extremely scarce. Calibrated datasets are often limited, resulting in poor training and classification performance; the few publicly available datasets are small, and high-quality data calibration is lacking. Although data augmentation can mitigate this problem, it carries a risk of overfitting. Nowadays, most general computer vision tasks are instead addressed by applying smaller filters in deeper networks or by hyperparameter optimization.
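A minimal sketch of label-preserving augmentation for hyperspectral patches is shown below (the eight rigid spatial symmetries; the patch shape is an illustrative assumption). Because rotations and flips act only on the spatial axes, each pixel's spectrum is untouched, which limits, but does not eliminate, the overfitting risk mentioned above:

```python
import numpy as np

def augment_patch(patch):
    """Yield the eight rigid symmetries (4 rotations x optional flip) of a
    hyperspectral patch of shape (height, width, bands). Transforms act on
    the two spatial axes only, so every pixel's spectrum is preserved."""
    for flipped in (patch, patch[:, ::-1, :]):
        for k in range(4):
            yield np.rot90(flipped, k, axes=(0, 1))

# Tiny 2x2 patch with 3 bands, standing in for a real training sample.
patch = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
augmented = list(augment_patch(patch))
print(len(augmented))  # 8 variants from one labeled sample
```

Eight-fold expansion is cheap, but when the underlying dataset is very small, the augmented samples remain highly correlated, which is why augmentation alone does not substitute for larger calibrated datasets.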
Finally, most hyperspectral acquisition devices are still relatively large, handheld devices see few applications, and their combination with deep-learning algorithms for real-time medical analysis has yet to be developed. As the technology matures, tissue analysis could be applied in the clinic more quickly and safely.
Although many problems hinder the development of this field, as technology continues to advance, more and more teams will devote themselves to research in medical hyperspectral imaging, build more complete databases, and develop more convenient and efficient imaging spectrometers. In addition, more scholars will study methods that combine spectral imaging with other biomedical imaging modalities. This will make the analysis more comprehensive and help to interpret the parameter information of different biological tissues, with the potential to replace traditional diagnostic equipment.
Therefore, when summarizing and integrating the literature, we chose meaningful articles and methods to describe and synthesize so that they are representative and provide directions for later researchers. Theoretical research is indispensable, but practical applicability is equally important for the evolution of the field. Publicly available datasets facilitate the aggregation of research results: studies in brain cancer diagnosis and diabetic foot disease show that complete, labeled datasets increase researchers' attention to a direction. It is expected that easily extractable data labels will become more readily accessible in the future.
In this review, a large volume of literature was collated and, according to its focus, divided into categories summarized in different sections, presenting a clear framework that reflects the development of HSI and of its combination with different technologies: from research on HSI in clinical analysis and operation guidance, to the analysis of medical HSI images with machine learning, and finally to applications of deep learning. An increasing number of scholars have devoted themselves to this research, greatly advancing the field.

5.3. Datasets

Most authors did not release their experimental data and code publicly, owing to privacy or medical ethics principles. However, a few institutions provide relevant datasets. Although some of the hyperspectral image data come from animal tissue, they are still useful for advancing research on algorithms and models. We have listed the collected public datasets wherever possible.
1. HSI Human Brain Database
Website: https://hsibraindatabase.iuma.ulpgc.es/ (accessed on 19 March 2018)
The links allow the download of the in vivo human brain hyperspectral images employed in [38]. This dataset has been used in several papers and is currently the most popular public hyperspectral dataset;
2. MALDI rat liver anticancer drug spiked-in dataset (imzML)
Website: https://www.ebi.ac.uk/pride/archive/projects/PXD016146 (accessed on 11 June 2019)
3. The Hyperspectral SRS and Fluorescence data
The links allow the download of the hyperspectral SRS and corresponding organelle fluorescence images used to train the deep-learning prediction models based on U-within-U-Net.
Links 2 and 3 are from reference [95]. The authors also shared the source code:
Source code: https://github.com/B-Manifold/pytorch_fnet_UwUnet (accessed on 29 December 2020)
4. A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract
Website: https://www.repository.cam.ac.uk/handle/1810/270691 (accessed on 17 January 2018)
The links allow the download of the raw and processed simulation and experimental data from the paper. This dataset was obtained from reference [61];
5. Parallel Implementations Assessment of a Spatial–Spectral Classifier for Hyperspectral Clinical Applications
The links allow the download of HS images taken from dermatological interventions. This is a recent dataset provided by Himar Fabelo et al. (the authors of references [38,42,47,49,51]);
6. Microscopic Hyperspectral Choledoch Dataset
The links allow the download of a dataset containing both microscopy hyperspectral and color images of cholangiocarcinoma. This dataset is presented in reference [100]. Because of upload space limitations, the providers uploaded only part of the data; the original files are located at http://bio-hsi.ecnu.edu.cn/, and a request is required to obtain the full dataset;
7. Multispectral Imaging Dataset of Colorectal Tissue
The links allow the download of images of two benign abnormality classes along with normal and cancerous classes. The dataset consists of four classes, each represented by infrared spectrum bands in addition to the visual spectrum bands. This dataset is presented in reference [51].

Author Contributions

Conceptualization, H.Y. and T.X.; methodology, R.C. and H.Y.; software, X.X. and X.C.; investigation, X.C. and J.C.; resources, H.Y.; data curation, K.Y. and J.C.; writing—original draft preparation, R.C.; writing—review and editing, R.C., H.Y., and T.X.; visualization, X.X.; supervision, T.X.; project administration, H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Natural Science Foundation of Jilin Provincial Science and Technology Department (Grant No. 20220101133JC).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the faculty for providing excellent research directions and guidance during the writing of the paper, and the research participants for providing support to help the authors complete the paper. All individuals included in this section have consented to the acknowledgement.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Peyghambari, S.; Zhang, Y. Hyperspectral remote sensing in lithological mapping, mineral exploration, and environmental geology: An updated review. J. Appl. Rem. Sens. 2021, 15, 031501. [Google Scholar] [CrossRef]
  2. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef] [Green Version]
  3. Wang, C.; Liu, B.; Liu, L.; Zhu, Y.; Hou, J.; Liu, P.; Li, X. A review of deep learning used in the hyperspectral image analysis for agriculture. Artif. Intell. Rev. 2021, 54, 5205–5253. [Google Scholar] [CrossRef]
  4. Seyrek, E.C.; Uysal, M. Classification of Hyperspectral Images with CNN in Agricultural Lands. Biol. Life Sci. Forum 2021, 3, 6. [Google Scholar]
  5. Lin, Y.; Ling, B.W.-K.; Hu, L.; Zheng, Y.; Xu, N.; Zhou, X.; Wang, X. Hyperspectral Image Enhancement by Two Dimensional Quaternion Valued Singular Spectrum Analysis for Object Recognition. Remote Sens. 2021, 13, 405. [Google Scholar] [CrossRef]
  6. Mahlein, A.-K.; Oerke, E.-C.; Steiner, U.; Dehne, H.-W. Recent advances in sensing plant diseases for precision crop protection. Eur. J. Plant Pathol. 2012, 133, 197–209. [Google Scholar] [CrossRef]
  7. Usha, K.; Singh, B. Potential applications of remote sensing in horticulture—A review. Sci. Hortic. 2013, 153, 71–83. [Google Scholar] [CrossRef]
  8. Huang, M.; Wan, X.; Zhang, M.; Zhu, Q. Detection of insect-damaged vegetable soybeans using hyperspectral transmittance image. J. Food Eng. 2013, 116, 45–49. [Google Scholar] [CrossRef]
  9. Temiz, H.T.; Ulaş, B. A Review of Recent Studies Employing Hyperspectral Imaging for the Determination of Food Adulteration. Photochem 2021, 1, 125–146. [Google Scholar] [CrossRef]
  10. Zhu, M.; Huang, D.; Hu, X.; Tong, W.; Han, B.; Tian, J.; Luo, H. Application of hyperspectral technology in detection of agricultural products and food: A Review. Food Sci. Nutr. 2020, 8, 5206–5214. [Google Scholar] [CrossRef]
  11. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  12. Wei, L.; Meng, L.; Tianhong, C.; Zhaoyao, C.; Ran, T. Application of a hyperspectral image in medical field: A review. J. Image Graph. 2021, 26, 1764–1785. [Google Scholar]
  13. Li, Q.; He, X.; Wang, Y.; Liu, H.; Xu, D.; Guo, F. Review of spectral imaging technology in biomedical engineering: Achievements and challenges. J. Biomed. Opt. 2013, 18, 100901. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Lu, G.; Qin, X.; Wang, D.; Muller, S.; Zhang, H.; Chen, A.; Chen, Z.G.; Fei, B. Hyperspectral imaging of neoplastic progression in a mouse model of oral carcinogenesis. In Medical Imaging 2016: Biomedical Applications in Molecular, Structural, and Functional Imaging; SPIE: San Diego, CA, USA, 2016. [Google Scholar]
  15. Madooei, A.; Abdlaty, R.M.; Doerwald-Munoz, L.; Hayward, J.; Drew, M.S.; Fang, Q.; Zerubia, J. Hyperspectral Image Processing for Detection and Grading of Skin Erythema; Styner, M.A., Angelini, E.D., Eds.; SPIE: Orlando, FL, USA, 2017; p. 1013322. [Google Scholar]
  16. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  17. Xiang, L.; Li, W.; Xiaodong, X.; Wei, H. Cell classification using convolutional neural networks in medical hyperspectral imagery. In Proceedings of the 2017 2nd International Conference on Image, Vision and Computing (ICIVC), Chengdu, China, 2–4 June 2017; pp. 501–504. [Google Scholar]
  18. Jeyaraj, P.R.; Samuel Nadar, E.R. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J. Cancer Res. Clin. Oncol. 2019, 145, 829–837. [Google Scholar] [CrossRef]
  19. Wei-hong, H.; Yong-hong, H.; Peng, L.; Yi, Z.; Yong-hong, S.; Rui-sheng, L.; Nan, Z.; Hui, M. Development of imaging system for optical coherence tomography in ophthalmology. Opt. Precis. Eng. 2008, 16, 438–443. [Google Scholar]
  20. Khan, U.; Paheding, S.; Elkin, C.P.; Devabhaktuni, V.K. Trends in Deep Learning for Medical Hyperspectral Image Analysis. IEEE Access 2021, 9, 79534–79548. [Google Scholar] [CrossRef]
  21. Fei, B. Hyperspectral imaging in medical applications. In Data Handling in Science and Technology; Elsevier: Amsterdam, The Netherlands, 2019; Volume 32, pp. 523–565. ISBN 978-0-444-63977-6. [Google Scholar]
  22. Sellar, R.G.; Boreman, G.D. Classification of imaging spectrometers for remote sensing applications. Opt. Eng. 2005, 44, 013602. [Google Scholar]
  23. Kulcke, A.; Holmer, A.; Wahl, P.; Siemers, F.; Wild, T.; Daeschlein, G. A compact hyperspectral camera for measurement of perfusion parameters in medicine. Biomed. Eng./Biomed. Tech. 2018, 63, 519–527. [Google Scholar] [CrossRef]
  24. Holmer, A.; Marotz, J.; Wahl, P.; Dau, M.; Kämmerer, P.W. Hyperspectral imaging in perfusion and wound diagnostics—Methods and algorithms for the determination of tissue parameters. Biomed. Eng./Biomed. Tech. 2018, 63, 547–556. [Google Scholar] [CrossRef] [PubMed]
  25. Aiazzi, B.; Alparone, L.; Barducci, A.; Baronti, S.; Marcoionni, P.; Pippi, I.; Selva, M. Noise modelling and estimation of hyperspectral data from airborne imaging spectrometers. Ann. Geophys. 2006, 49. [Google Scholar]
  26. Jian, D. Research of Tumor Tissue Classification based on Medical Hyperspectral Imaging Analysis. Ph.D. dissertation, Xi’an Institute of Optics & Precision Mechanics, Chinese Academy of Sciences, Xi’an, China, December 2018. [Google Scholar]
  27. Gupta, N. Development of Staring Hyperspectral Imagers. In Proceedings of the 2011 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), Washington, DC, USA, 11–13 October 2011. [Google Scholar]
  28. Balas, C.; Pappas, C.; Epitropou, G. Multi/hyper-spectral imaging. In Handbook of Biomedical Optics; CRC Press: New York, NY, USA, 2011; pp. 131–164. [Google Scholar]
  29. Gao, L.; Kester, R.T.; Tkaczyk, T.S. Compact Image Slicing Spectrometer (ISS) for hyperspectral fluorescence microscopy. Opt. Express 2009, 17, 12293. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Weitzel, L.; Krabbe, A.; Kroker, H.; Thatte, N.; Tacconi-Garman, L.E.; Cameron, M.; Genzel, R. 3D: The next generation near-infrared imaging spectrometer. Astron. Astrophys. Suppl. Ser. 1996, 119, 531–546. [Google Scholar] [CrossRef] [Green Version]
  31. Halicek, M.; Little, J.V.; Wang, X.; Patel, M.R.; Griffith, C.C.; Chen, A.Y.; Fei, B. Tumor margin classification of head and neck cancer using hyperspectral imaging and convolutional neural networks. In Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling; Webster, R.J., Fei, B., Eds.; SPIE: Houston, TX, USA, 2018; p. 4. [Google Scholar]
  32. Bengs, M.; Gessert, N.; Laffers, W.; Eggert, D.; Westermann, S.; Mueller, N.A.; Gerstner, A.O.H.; Betz, C.; Schlaefer, A. Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo Hyperspectral Tumor Type Classification. arXiv 2020, arXiv:2007.01042. [Google Scholar]
  33. Halicek, M.; Little, J.V.; Wang, X.; Chen, A.Y.; Fei, B. Optical biopsy of head and neck cancer using hyperspectral imaging and convolutional neural networks. J. Biomed. Opt. 2019, 24, 1. [Google Scholar] [CrossRef] [Green Version]
  34. Sommer, F.; Sun, B.; Fischer, J.; Goldammer, M.; Thiele, C.; Malberg, H.; Markgraf, W. Hyperspectral Imaging during Normothermic Machine Perfusion—A Functional Classification of Ex Vivo Kidneys Based on Convolutional Neural Networks. Biomedicines 2022, 10, 397. [Google Scholar] [CrossRef]
  35. Hu, B.; Du, J.; Zhang, Z.; Wang, Q. Tumor tissue classification based on micro-hyperspectral technology and deep learning. Biomed. Opt. Express 2019, 10, 6370. [Google Scholar] [CrossRef]
  36. Huang, Q.; Li, W.; Xie, X. Convolutional neural network for medical hyperspectral image classification with kernel fusion. In Proceedings of the BIBE 2018; International Conference on Biological Information and Biomedical Engineering, Shanghai, China, 6–8 June 2018. [Google Scholar]
  37. Barberio, M.; Collins, T.; Bencteux, V.; Nkusi, R.; Felli, E.; Viola, M.G.; Marescaux, J.; Hostettler, A.; Diana, M. Deep Learning Analysis of In Vivo Hyperspectral Images for Automated Intraoperative Nerve Detection. Diagnostics 2021, 11, 1508. [Google Scholar] [CrossRef]
  38. Fabelo, H.; Ortega, S.; Ravi, D.; Kiran, B.R.; Sosa, C.; Bulters, D.; Callicó, G.M.; Bulstrode, H.; Szolna, A.; Piñeiro, J.F.; et al. Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations. PLoS ONE 2018, 13, e0193721. [Google Scholar] [CrossRef] [Green Version]
  39. Manni, F.; Fonolla, R.; van der Sommen, F.; Zinger, S.; Shan, C.; Kho, E.; de Koning, S.B.; Ruers, T.; de With, P.H.N. Hyperspectral imaging for colon cancer classification in surgical specimens: Towards optical biopsy during image-guided surgery. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1169–1173. [Google Scholar]
  40. Lindholm, V.; Raita-Hakola, A.-M.; Annala, L.; Salmivuori, M.; Jeskanen, L.; Saari, H.; Koskenmies, S.; Pitkänen, S.; Pölönen, I.; Isoherranen, K.; et al. Differentiating Malignant from Benign Pigmented or Non-Pigmented Skin Tumours—A Pilot Study on 3D Hyperspectral Imaging of Complex Skin Surfaces and Convolutional Neural Networks. JCM 2022, 11, 1914. [Google Scholar] [CrossRef] [PubMed]
  41. Leon, R.; Martinez-Vega, B.; Fabelo, H.; Ortega, S.; Melian, V.; Castaño, I.; Carretero, G.; Almeida, P.; Garcia, A.; Quevedo, E.; et al. Non-Invasive Skin Cancer Diagnosis Using Hyperspectral Imaging for In-Situ Clinical Support. JCM 2020, 9, 1662. [Google Scholar] [CrossRef] [PubMed]
  42. Ortega, S.; Halicek, M.; Fabelo, H.; Guerra, R.; Lopez, C.; Lejeune, M.; Godtliebsen, F.; Callico, G.M.; Fei, B. Hyperspectral imaging and deep learning for the detection of breast cancer cells in digitized histological images. In Medical Imaging 2020: Digital Pathology; Tomaszewski, J.E., Ward, A.D., Eds.; SPIE: Houston, TX, USA, 2020; p. 30. [Google Scholar]
  43. Ma, L.; Lu, G.; Wang, D.; Wang, X.; Chen, Z.G.; Muller, S.; Chen, A.; Fei, B. Deep Learning Based Classification for Head and neck Cancer Detection with Hyperspectral Imaging in An Animal Model; Krol, A., Gimi, B., Eds.; SPIE: Orlando, FL, USA, 2017; p. 101372G. [Google Scholar]
  44. Maktabi, M.; Köhler, H.; Ivanova, M.; Jansen-Winkeln, B.; Takoh, J.; Niebisch, S.; Rabe, S.M.; Neumuth, T.; Gockel, I.; Chalopin, C. Tissue classification of oncologic esophageal resectates based on hyperspectral data. Int. J. CARS 2019, 14, 1651–1661. [Google Scholar] [CrossRef]
  45. Grigoroiu, A.; Yoon, J.; Bohndiek, S.E. Deep learning applied to hyperspectral endoscopy for online spectral classification. Sci. Rep. 2020, 10, 3947. [Google Scholar] [CrossRef] [Green Version]
  46. Manni, F.; van der Sommen, F.; Zinger, S.; Shan, C.; Holthuizen, R.; Lai, M.; Buström, G.; Hoveling, R.J.M.; Edström, E.; Elmi-Terander, A.; et al. Hyperspectral Imaging for Skin Feature Detection: Advances in Markerless Tracking for Spine Surgery. Appl. Sci. 2020, 10, 4078. [Google Scholar] [CrossRef]
  47. Ortega, S.; Fabelo, H.; Camacho, R.; de la Luz Plaza, M.; Callicó, G.M.; Sarmiento, R. Detecting brain tumor in pathological slides using hyperspectral imaging. Biomed. Opt. Express 2018, 9, 818. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Halicek, M.; Dormer, J.D.; Little, J.V.; Chen, A.Y.; Fei, B. Tumor detection of the thyroid and salivary glands using hyperspectral imaging and deep learning. Biomed. Opt. Express 2020, 11, 1383. [Google Scholar] [CrossRef]
  49. Halicek, M.; Fabelo, H.; Ortega, S.; Little, J.V.; Wang, X.; Chen, A.Y.; Callicó, G.M.; Myers, L.; Sumer, B.; Fei, B. Cancer detection using hyperspectral imaging and evaluation of the superficial tumor margin variance with depth. In Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling; Fei, B., Linte, C.A., Eds.; SPIE: San Diego, CA, USA, 2019; p. 45. [Google Scholar]
  50. Lin, J.; Clancy, N.T.; Sun, X.; Qi, J.; Janatka, M.; Stoyanov, D.; Elson, D.S. Probe-Based Rapid Hybrid Hyperspectral and Tissue Surface Imaging Aided by Fully Convolutional Networks. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Springer International Publishing: Cham, Switzerland, 2016; Volume 9902, pp. 414–422. ISBN 978-3-319-46725-2. [Google Scholar]
  51. Ortega, S.; Halicek, M.; Fabelo, H.; Camacho, R.; de la Plaza, M.L.; Godtliebsen, F.M.; Callicó, G.; Fei, B. Hyperspectral Imaging for the Detection of Glioblastoma Tumor Cells in H&E Slides Using Convolutional Neural Networks. Sensors 2020, 20, 1911. [Google Scholar] [CrossRef] [Green Version]
  52. Zhou, X.; Ma, L.; Brown, W.; Little, J.; Chen, A.; Myers, L.; Sumer, B.; Fei, B. Automatic detection of head and neck squamous cell carcinoma on pathologic slides using polarized hyperspectral imaging and machine learning. In Medical Imaging 2021: Digital Pathology; Tomaszewski, J.E., Ward, A.D., Eds.; SPIE: Bellingham, WA, USA, 2021; p. 23. [Google Scholar]
  53. Lu, G.; Little, J.V.; Wang, X.; Zhang, H.; Patel, M.R.; Griffith, C.C.; El-Deiry, M.W.; Chen, A.Y.; Fei, B. Detection of Head and Neck Cancer in Surgical Specimens Using Quantitative Hyperspectral Imaging. Clin. Cancer Res. 2017, 23, 5426–5436. [Google Scholar] [CrossRef] [Green Version]
  54. Trajanovski, S.; Shan, C.; Weijtmans, P.J.C.; de Koning, S.G.B.; Ruers, T.J.M. Tongue Tumor Detection in Hyperspectral Images Using Deep Learning Semantic Segmentation. IEEE Trans. Biomed. Eng. 2021, 68, 1330–1340. [Google Scholar] [CrossRef] [PubMed]
  55. Wang, Q.; Sun, L.; Wang, Y.; Zhou, M.; Hu, M.; Chen, J.; Wen, Y.; Li, Q. Identification of Melanoma From Hyperspectral Pathology Image Using 3D Convolutional Networks. IEEE Trans. Med. Imaging 2021, 40, 218–227. [Google Scholar] [CrossRef]
  56. Seidlitz, S.; Sellner, J.; Odenthal, J.; Özdemir, B.; Studier-Fischer, A.; Knödler, S.; Ayala, L.; Adler, T.; Kenngott, H.G.; Tizabi, M.; et al. Robust deep learning-based semantic organ segmentation in hyperspectral images. Med. Image Anal. 2022, 80, 102488. [Google Scholar] [CrossRef] [PubMed]
  57. Cervantes-Sanchez, F.; Maktabi, M.; Köhler, H.; Sucher, R.; Rayes, N.; Avina-Cervantes, J.G.; Cruz-Aceves, I.; Chalopin, C. Automatic tissue segmentation of hyperspectral images in liver and head neck surgeries using machine learning. AIS 2021, 1, 22–37. [Google Scholar] [CrossRef]
  58. Li, Y.; Deng, L.; Yang, X.; Liu, Z.; Zhao, X.; Huang, F.; Zhu, S.; Chen, X.; Chen, Z.; Zhang, W. Early diagnosis of gastric cancer based on deep learning combined with the spectral-spatial classification method. Biomed. Opt. Express 2019, 10, 4999. [Google Scholar] [CrossRef] [PubMed]
  59. Liu, N.; Guo, Y.; Jiang, H.; Yi, W. Gastric cancer diagnosis using hyperspectral imaging with principal component analysis and spectral angle mapper. J. Biomed. Opt. 2020, 25, 1. [Google Scholar] [CrossRef]
  60. Ma, L.; Halicek, M.; Fei, B. In vivo cancer detection in animal model using hyperspectral image classification with wavelet feature extraction. In Medical Imaging 2020: Biomedical Applications in Molecular, Structural, and Functional Imaging; Gimi, B.S., Krol, A., Eds.; SPIE: Houston, TX, USA, 2020; p. 48. [Google Scholar]
  61. Dremin, V.; Marcinkevics, Z.; Zherebtsov, E.; Popov, A.; Grabovskis, A.; Kronberga, H.; Geldnere, K.; Doronin, A.; Meglinski, I.; Bykov, A. Skin Complications of Diabetes Mellitus Revealed by Polarized Hyperspectral Imaging and Machine Learning. IEEE Trans. Med. Imaging 2021, 40, 1207–1216. [Google Scholar] [CrossRef]
  62. Köhler, H.; Kulcke, A.; Maktabi, M.; Moulla, Y.; Jansen-Winkeln, B.; Barberio, M.; Diana, M.; Gockel, I.; Neumuth, T.; Chalopin, C. Laparoscopic system for simultaneous high-resolution video and rapid hyperspectral imaging in the visible and near-infrared spectral range. J. Biomed. Opt. 2020, 25, 086004. [Google Scholar] [CrossRef]
  63. Halicek, M.; Lu, G.; Little, J.V.; Wang, X.; Patel, M.; Griffith, C.C.; El-Deiry, M.W.; Chen, A.Y.; Fei, B. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 2017, 22, 060503. [Google Scholar] [CrossRef]
  64. Akbari, H.; Halig, L.V.; Zhang, H.; Wang, D.; Chen, Z.G.; Fei, B. Detection of cancer metastasis using a novel macroscopic hyperspectral method. In Medical Imaging 2012: Biomedical Applications in Molecular, Structural, and Functional Imaging; SPIE: San Diego, CA, USA, 2012; Volume 8317, pp. 299–305. [Google Scholar]
  65. Lu, G.; Halig, L.; Wang, D.; Qin, X.; Chen, Z.G.; Fei, B. Spectral-spatial classification for noninvasive cancer detection using hyperspectral imaging. J. Biomed. Opt. 2014, 19, 106004. [Google Scholar] [CrossRef] [Green Version]
  66. Raita-Hakola, A.-M.; Annala, L.; Lindholm, V.; Trops, R.; Näsilä, A.; Saari, H.; Ranki, A.; Pölönen, I. FPI Based Hyperspectral Imager for the Complex Surfaces—Calibration, Illumination and Applications. Sensors 2022, 22, 3420. [Google Scholar] [CrossRef] [PubMed]
  67. Zherebtsov, E.; Dremin, V.; Popov, A.; Doronin, A.; Kurakina, D.; Kirillin, M.; Meglinski, I.; Bykov, A. Hyperspectral imaging of human skin aided by artificial neural networks. Biomed. Opt. Express 2019, 10, 3545–3559. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Baig, N.; Fabelo, H.; Ortega, S.; Callico, G.M.; Alirezaie, J.; Umapathy, K. Empirical Mode Decomposition Based Hyperspectral Data Analysis for Brain Tumor Classification. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 2274–2277. [Google Scholar]
  69. Fei, B.; Lu, G.; Halicek, M.T.; Wang, X.; Zhang, H.; Little, J.V.; Magliocca, K.R.; Patel, M.; Griffith, C.C.; El-Deiry, M.W. Label-free hyperspectral imaging and quantification methods for surgical margin assessment of tissue specimens of cancer patients. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 4041–4045. [Google Scholar]
  70. Lu, G.; Wang, D.; Qin, X.; Halig, L.; Muller, S.; Zhang, H.; Chen, A.; Pogue, B.W.; Chen, Z.G.; Fei, B. Framework for hyperspectral image processing and quantification for cancer detection during animal tumor surgery. J. Biomed. Opt. 2015, 20, 126012. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  71. Kong, S.G.; Du, Z.; Martin, M.; Vo-Dinh, T. Hyperspectral fluorescence image analysis for use in medical diagnostics. In Advanced Biomedical and Clinical Diagnostic Systems III; SPIE: San Jose, CA, USA, 2005; Volume 5692, pp. 21–28. [Google Scholar]
  72. Markgraf, W.; Lilienthal, J.; Feistel, P.; Thiele, C.; Malberg, H. Algorithm for mapping kidney tissue water content during normothermic machine perfusion using hyperspectral imaging. Algorithms 2020, 13, 289. [Google Scholar] [CrossRef]
  73. Mou, L.; Saha, S.; Hua, Y.; Bovolo, F.; Bruzzone, L.; Zhu, X.X. Deep Reinforcement Learning for Band Selection in Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  74. Sun, W.; Du, Q. Hyperspectral Band Selection: A Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 118–139. [Google Scholar] [CrossRef]
  75. Wang, Q.; Li, Q.; Li, X. Hyperspectral Band Selection via Adaptive Subspace Partition Strategy. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 4940–4950. [Google Scholar] [CrossRef]
  76. Sun, W.; Peng, J.; Yang, G.; Du, Q. Fast and Latent Low-Rank Subspace Clustering for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2020, 58, 3906–3915. [Google Scholar] [CrossRef]
  77. Li, W.; Du, Q. Gabor-filtering-based nearest regularized subspace for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022. [Google Scholar] [CrossRef]
  78. Li, W.; Chen, C.; Su, H.; Du, Q. Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693. [Google Scholar] [CrossRef]
  79. Yu, H.; Yang, J. A direct LDA algorithm for high-dimensional data—With application to face recognition. Pattern Recognit. 2001, 34, 2067–2070. [Google Scholar] [CrossRef] [Green Version]
  80. Huang, Q.; Li, W.; Zhang, B.; Li, Q.; Tao, R.; Lovell, N.H. Blood Cell Classification Based on Hyperspectral Imaging With Modulated Gabor and CNN. IEEE J. Biomed. Health Inform. 2020, 24, 160–170. [Google Scholar] [CrossRef] [PubMed]
  81. Collins, T.; Maktabi, M.; Barberio, M.; Bencteux, V.; Jansen-Winkeln, B.; Chalopin, C.; Marescaux, J.; Hostettler, A.; Diana, M.; Gockel, I. Automatic Recognition of Colon and Esophagogastric Cancer with Machine Learning and Hyperspectral Imaging. Diagnostics 2021, 11, 1810. [Google Scholar] [CrossRef]
  82. Torti, E.; Florimbi, G.; Castelli, F.; Ortega, S.; Fabelo, H.; Callicó, G.; Marrero-Martin, M.; Leporati, F. Parallel K-Means Clustering for Brain Cancer Detection Using Hyperspectral Images. Electronics 2018, 7, 283. [Google Scholar] [CrossRef] [Green Version]
  83. Nathan, M.; Kabatznik, A.S.; Mahmood, A. Hyperspectral imaging for cancer detection and classification. In Proceedings of the 2018 3rd Biennial South African Biomedical Engineering Conference (SAIBMEC), Stellenbosch, South Africa, 4–6 April 2018; pp. 1–4. [Google Scholar]
Figure 1. Spectral data cube.
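The data cube in Figure 1 is simply a three-dimensional array: two spatial axes and one spectral axis. A minimal NumPy sketch (illustrative only; the array sizes and variable names are not from the paper) shows how a pixel spectrum and a single-band image are extracted from it:

```python
import numpy as np

H, W, B = 128, 128, 64          # height, width, number of spectral bands (illustrative)
cube = np.random.rand(H, W, B)  # stand-in for a measured hyperspectral data cube

# The spectrum of one spatial pixel is a 1-D vector across all bands
spectrum = cube[40, 25, :]

# A single-band image is a 2-D spatial slice at one wavelength
band_img = cube[:, :, 10]

assert spectrum.shape == (B,)
assert band_img.shape == (H, W)
```

Deep learning models for medical hyperspectral images typically consume this cube directly (3-D convolutions) or operate on its spectral and spatial slices separately.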
Figure 2. Schematic diagram of a push-broom hyperspectral imaging system.
Figure 3. Typical spectral imaging methods. (a) Whiskbroom. (b) Push broom. (c) Staring.
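The three acquisition modes in Figure 3 differ only in the order in which the cube is filled: whiskbroom records one pixel's spectrum per step, push broom records one spatial line with all bands per step, and staring records one full spatial frame per band. A hedged NumPy sketch (a hypothetical scene; shapes and names are illustrative, not from the paper) makes the equivalence concrete:

```python
import numpy as np

H, W, B = 32, 32, 8              # illustrative cube dimensions
scene = np.random.rand(H, W, B)  # hypothetical ground-truth scene

# Whiskbroom: one pixel's full spectrum per step (H*W steps)
whisk = np.array([scene[y, x, :] for y in range(H) for x in range(W)]).reshape(H, W, B)

# Push broom: one spatial line with all bands per step (H steps)
push = np.stack([scene[y, :, :] for y in range(H)], axis=0)

# Staring: one complete spatial frame per band (B steps)
stare = np.stack([scene[:, :, b] for b in range(B)], axis=-1)

# All three acquisition orders reconstruct the same data cube
assert np.array_equal(whisk, scene)
assert np.array_equal(push, scene)
assert np.array_equal(stare, scene)
```

The trade-off is between acquisition time and motion sensitivity: staring systems need a static scene across bands, while scanning systems (whiskbroom, push broom) need controlled relative motion across space.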
Figure 4. Push broom TIVITA tissue camera: (A) schematic diagram of the push broom spectroscopy system; (B) hyperspectral camera mounted on a medical vehicle [23].
Figure 5. SICSURFIS handheld hyperspectral imaging system [40]. The article [66] describes the principle and testing of the first phase of a three-stage pilot of this imaging system, focusing on intricate skin surfaces. The device is still a prototype and requires further refinement before clinical use.
Figure 6. Compact handheld hyperspectral imaging system [61]. The proposed imaging system [67] was designed to enable quantitative diagnosis and visualization of human skin, including two-dimensional mapping of skin chromophores, mapping of hemoglobin–oxygen dynamics, and assessment of skin perfusion.
Cui, R.; Yu, H.; Xu, T.; Xing, X.; Cao, X.; Yan, K.; Chen, J. Deep Learning in Medical Hyperspectral Images: A Review. Sensors 2022, 22, 9790. https://doi.org/10.3390/s22249790
