Article

Evaluation of Focus Measures for Hyperspectral Imaging Microscopy Using Principal Component Analysis

National Metrology Institute, The Scientific and Technological Research Council of Türkiye (TÜBİTAK-UME, Ulusal Metroloji Enstitüsü), Kocaeli 41470, Türkiye
J. Imaging 2024, 10(10), 240; https://doi.org/10.3390/jimaging10100240
Submission received: 31 July 2024 / Revised: 21 September 2024 / Accepted: 23 September 2024 / Published: 26 September 2024
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)

Abstract
An automatic focusing system is a crucial component of automated microscopes, adjusting the lens-to-object distance to find the optimal focus by maximizing the focus measure (FM) value. This study develops reliable autofocus methods for hyperspectral imaging microscope systems, which are essential for extracting accurate chemical and spatial information from hyperspectral datacubes. Since FMs are domain- and application-specific, their performance is commonly evaluated using verified focus positions. For example, in optical microscopy, the sharpness/contrast of the visual peculiarities of a sample under test typically serves as an anchor to determine the best focus position, but this approach is challenging in hyperspectral imaging systems (HSISs), where instant two-dimensional hyperspectral images do not always possess human-comprehensible visual information. To address this, principal component analysis (PCA) was used to define the optimal ("ideal") optical focus position in HSISs, providing a benchmark for assessing 22 FMs commonly used in other imaging fields. Evaluations utilized hyperspectral images from the visible (400–1100 nm) and near-infrared (900–1700 nm) bands across four different HSIS setups with varying magnifications. Results indicate that gradient-based FMs are the fastest and most reliable operators in this context.

1. Introduction

Hyperspectral imaging technology, initially developed for remote sensing and military applications, has spread over a vast range of application areas and disciplines. For instance, it has been equally successful when applied to distinguishing tumors from normal tissue [1,2] and in astronomy for investigating the global distribution of mineralogy on the Martian surface [3]. This broad progress became possible due to many mutually supporting factors, such as advances in multivariate data analysis methods and improvements in high-throughput optics and solid-state imaging sensors.
Extensive research in the field of optical design has resulted in imaging spectrographs with significantly improved throughput, which, in conjunction with highly sensitive digital cameras, gave rise to a new paradigm in optical microscopy—high-magnification, high-resolution hyperspectral imaging microscopy. Even the first hyperspectral imaging microscopes (HSIMs), which covered only the visual spectral region, demonstrated many advantages over traditional color imaging microscopes, mainly due to their capability of differentiating among similarly perceived objects [4]. Additionally, recent achievements in uncooled high-resolution InGaAs focal plane arrays (FPAs) have allowed the benefits of vibrational spectroscopy to be integrated with high-magnification short-wavelength infrared (SWIR) microscopy. As a result, it is possible to sense the spatial distribution of various chemical components whose spectral signatures originate from molecular overtones and combination vibrations in the SWIR range (700–2500 nm). The ability of HSIMs to recover the chemical constituent maps of investigated samples has significantly extended their application areas and made them an important tool in many fields for the estimation and prediction of the comprising components [5,6,7,8]. More importantly, the integration of chemometrics with near-infrared (NIR) hyperspectral imaging has transformed the HSIM from a tool used only for the detection and visualization of the distribution of physical, chemical, and biological quality attributes into a measurement instrument that allows these attributes to be assessed quantitatively [9]. Recently, the introduction of deep learning methods in hyperspectral content analysis has significantly increased the ability of hyperspectral imaging to classify heterogeneous samples using their spectrally and spatially resolved content.
This transformation has turned the HSIM into a major microscopy technique that is widely adopted not only by chemometricians and analysts but also in diverse multidisciplinary fields [10]. Explosive growth of applications is observed in safety and quality control [11], the inspection of anomalies [12], food commodities and food adulteration [13,14], and cultivated land quality [15]. Spectral sensing techniques are also commonly used for estimating the spectral signatures of various substances, including the human coronavirus [16] and land cover [17].
The performance of multivariate statistical algorithms in the extraction of reliable chemical information from hyperspectral images strongly depends on the quality of the band images that constitute the analyzed hypercubes. Focusing affects image quality and is one of the fundamental adjustment operations available in optical imaging instruments. A well-focused image has the highest achievable sharpness, large contrast, and a considerable amount of detail (within the field of view (FOV)) conveyed by a wide range of intensity values [18].
Focusing can be performed manually or automatically; the latter process is called autofocusing. Unfortunately, a universal autofocusing algorithm (AFA) does not exist. Instead, an AFA is selected for each specific imaging domain. The performance of various AFAs across a range of imaging applications—including optical microscopy [19], automated microscopy for biological samples [19,20,21,22,23], optical systems [24,25,26,27,28], digital holographic microscopy [29], thermal imaging [30,31], and scanning electron microscopy [32]—has been extensively compared and studied. However, to the best of our knowledge, there is no comprehensive study of the performance of AFAs within the hyperspectral imaging domain.
The aim of the current work is twofold. The first objective is to establish a benchmark for the autofocusing of HSISs by evaluating twenty-two of the most widely used FMs, carefully selected based on their performance across various imaging domains. The second objective is to demonstrate that PCA, employed as a reference for determining the optimal focus position, outperforms reliance on human-comprehensible visual information, since instantaneous two-dimensional hyperspectral images do not always possess such clarity.
The remainder of this paper is organized as follows: Section 2 describes the developed test facility including measurement methods and main metrological characteristics. Thereafter, the results of the experimental measurements are discussed in Section 3. Finally, a discussion and conclusions are presented in Section 4 and Section 5, respectively.

2. Materials and Methods

2.1. Autofocus Methods

In an image acquisition process, the objective of an AFA is to automatically adjust the lens to the correct position (in focus), ensuring that light from a single point on the object plane converges to a single point on the image plane. Consequently, the resulting image on the focal plane exhibits higher sharpness, with its FM function reaching a global extremum, typically a maximum.
In automated microscopy, a series of N images, each corresponding to the same FOV but captured at a different focal position around the focal plane, is acquired. This sequence is obtained by moving the sample along the optical axis of the microscope system, typically using a servo-controlled translation stage. The images in the stack are indexed by their focal positions along the optical axis. Given that the images encompass the same FOV, the sole distinction lies in their sharpness, and the image with the highest sharpness is anticipated to represent the best focus [18]. Thus, for each image in the stack, a scalar FM value is computed and stored in a vector. The optimal focal position is then determined by seeking the global maximum of this vector.
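The search procedure described above can be sketched in a few lines of NumPy. This is an illustrative skeleton, not the authors' implementation: `variance_fm` is only a placeholder focus measure, and any of the FMs from Section 2.2 could be substituted for it.

```python
import numpy as np

def variance_fm(img):
    """Placeholder FM: a well-focused image has a larger intensity spread."""
    return float(np.var(img))

def autofocus_search(stack, focus_measure=variance_fm):
    """Score every image of a through-focus stack (same FOV, different focal
    positions) and return the index of the global maximum of the FM vector."""
    scores = np.array([focus_measure(img) for img in stack])
    best_index = int(np.argmax(scores))
    return best_index, scores
```

In a real system, `best_index` maps back to a translation-stage position; here the stack index stands in for the focal coordinate.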
The primary figure of merit for any FM is its focus curve, which represents the relationship between image sharpness and focal position. Ideally, an effective FM should exhibit a smooth focus curve featuring a single, clearly defined peak corresponding to the precise focus position (Figure 1a). Additionally, the curve should symmetrically and uniformly decrease (or increase) on either side of the focus position. However, in practical scenarios, focus curves may significantly deteriorate and deviate from the ideal expectations. The robustness of the focus curve is not solely contingent on the specific FM employed; rather, factors such as the noise level, image content, and characteristics of the imaging system can contribute to the degradation of the curve.
A brief description of the 22 FMs utilized in this study is given in the next subsection.

2.2. The Focus Measures

In the present study, we evaluate a collection of twenty-two FMs within the context of autofocusing in hyperspectral imaging, which has its own peculiarities. A brief description of each FM is given below.
i. Gradient-based functions.
The gradient-based focus measures rely on intensity variations among neighboring pixels, exploiting the principle that focused images exhibit higher-intensity differences. These functions calculate the local gradient of each individual image in the stack and then sum them:
$$FM_{grad} = \sum_{i,j} \left\| \nabla I(i,j) \right\|^2, \quad \text{if } \left\| \nabla I(i,j) \right\| > \theta,$$
where $\| \cdot \|$ denotes the Euclidean norm. To enhance sensitivity in noisy images, only gradient values above a threshold $\theta$ are commonly accumulated. The most popular gradient-based focus measure, introduced by Tenenbaum, is considered the benchmark in the field [33,34]. It convolves an image with two two-dimensional filter kernels $O_p$ ($p = 1, 2$) and then sums the squares of the gradient vector components:
$$FM_{grad} = \sum_{i,j} \left[ (I * O_1)(i,j) \right]^2 + \left[ (I * O_2)(i,j) \right]^2,$$
where "$*$" denotes the two-dimensional discrete convolution operator, and the kernels are defined as
$$O_1 = \begin{pmatrix} -1 & 0 & 1 \\ -a & 0 & a \\ -1 & 0 & 1 \end{pmatrix} \quad \text{and} \quad O_2 = O_1^{T}.$$
In this study, we employ two pairs of kernels, where the parameter value $a$ determines the FM variant. For $a = 2$, the FM is termed the Tenengrad FM on Sobel operators (referred to as TEN1), and for $a = 1$, it is denoted as the Tenengrad FM on Prewitt operators (TEN2). Both FMs are based on standard edge detection masks with different kernels. TEN2 is used to detect horizontal and vertical edges, while TEN1 emphasizes central pixels and is less sensitive to noise.
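As a minimal NumPy sketch of the Tenengrad measure, the following is our own illustration (the `conv2_valid` helper and its "valid"-region border handling are implementation choices, not taken from the paper):

```python
import numpy as np

def conv2_valid(img, k):
    """2-D 'valid' discrete convolution via explicit shifts (no SciPy)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for di in range(kh):
        for dj in range(kw):
            # convolution flips the kernel relative to plain correlation
            out += k[kh - 1 - di, kw - 1 - dj] * img[di:di + out.shape[0],
                                                     dj:dj + out.shape[1]]
    return out

def tenengrad(img, a=2.0):
    """TEN1 (a=2, Sobel) / TEN2 (a=1, Prewitt): sum of squared responses
    of the two directional kernels O1 and O2 = O1^T."""
    o1 = np.array([[-1, 0, 1], [-a, 0, a], [-1, 0, 1]], dtype=float)
    gx = conv2_valid(img, o1)
    gy = conv2_valid(img, o1.T)
    return float(np.sum(gx ** 2 + gy ** 2))
```

Blurring attenuates every non-DC spatial frequency, so the score of a defocused frame is expected to drop for either kernel variant.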
Additionally, three FMs based on the second derivative, of the generic form
$$FM_{grad} = \sum_{i,j} \left[ \nabla^2 I(i,j) \right]^2,$$
were included in the scope of the present work.
(i) Energy of Laplacian (EOL): This FM retrieves the sharpness value by analyzing the high spatial frequencies associated with image borders and is computed by convolving the image with Laplacian masks:
$$EOL = \sum_{i,j} \left[ (I * L_x)(i,j) + (I * L_y)(i,j) \right]^2,$$
where $L_x = \begin{pmatrix} -1 & 2 & -1 \end{pmatrix}$ and $L_y = L_x^{T}$ are the Laplacian operators.
(ii) Sum-Modified Laplacian (SML): Proposed by Nayar and Nakagawa [34], this function serves as an alternative definition of the EOL. It is derived from the observation that the second derivatives in the horizontal and vertical directions can have opposite signs and cancel each other out:
$$SML = \sum_{i,j} \left| (I * L_x)(i,j) \right| + \left| (I * L_y)(i,j) \right|.$$
(iii) Diagonal Laplacian (DLF): This focus measure, proposed in [35], extends the SML with diagonal terms, thus considering variations along both the spectral and spatial directions of hyperspectral images, and was also evaluated in this work:
$$DLF = \sum_{i,j} \left| (I * L_x)(i,j) \right| + \left| (I * L_y)(i,j) \right| + \left| (I * L_{d1})(i,j) \right| + \left| (I * L_{d2})(i,j) \right|,$$
where
$$L_{d1} = \frac{1}{\sqrt{2}} \begin{pmatrix} 0 & 0 & 1 \\ 0 & -2 & 0 \\ 1 & 0 & 0 \end{pmatrix} \quad \text{and} \quad L_{d2} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Next, three representatives of another family of gradient-based FMs, which accumulate image derivatives of the generalized form
$$FM_{der} = \sum_{i,j} \left| I(i,j) - I(i+k, j+l) \right|^p,$$
under the condition $\left| I(i,j) - I(i+k, j+l) \right| > \theta$, were included in this work:
(iv) $FM_{der}$ with parameters $l = 0$, $k = 1$, $p = 1$, and $\theta > 0$ is named the Thresholded Absolute Gradient, referred to as ABG in this paper. The ABG sums the first derivative of the image in the horizontal dimension, as a focused image has stronger gradients than a defocused one.
(v) The case with parameters ($l = 0$, $k = 1$, $p = 2$) is named the Squared Absolute Gradient (SAG). The SAG is distinguished from the ABG by summing the square of the first derivative of the image in the horizontal dimension, which increases the contribution of larger gradients.
(vi) The case with parameters ($l = 0$, $k = 2$, $p = 2$) is named the Brenner function (BRE) [32]. This FM is based on the second difference of the image intensity in the horizontal direction, which corresponds to the spatial axis of hyperspectral images. Some works also report applying it in the vertical direction.
The SAG and BRE can be used without applying a threshold; however, in the current work, we applied a threshold value based on the noise level of the images.
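The whole generalized derivative family (ABG, SAG, BRE) can be sketched in one NumPy function. The slicing-based implementation, the argument names, and the choice of the first array axis as the "horizontal" dimension are our illustrative assumptions:

```python
import numpy as np

def fm_der(img, k=1, l=0, p=1, theta=0.0):
    """Generalized derivative FM: sum |I(i,j) - I(i+k, j+l)|^p over all
    pixels where the absolute difference exceeds the threshold theta.
    (k=1, p=1): ABG;  (k=1, p=2): SAG;  (k=2, p=2): Brenner (BRE)."""
    H, W = img.shape
    d = np.abs(img[0:H - k, 0:W - l or W] - img[k:H, l:W])
    d = np.where(d > theta, d, 0.0)       # thresholded accumulation
    return float(np.sum(d ** p))
```

With `theta` set from the measured noise floor (as done in the paper for SAG and BRE), small noise-driven differences no longer contribute to the score.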
Finally, we complete the list of gradient-based FMs with the following two functions:
(vii) Energy of Image Gradient (EIG): This measure accumulates the sum of squared directional gradients [36]:
$$EIG = \sum_{i,j} I_x(i,j)^2 + I_y(i,j)^2,$$
where $I_x(i,j) = I(i+1,j) - I(i,j)$ and $I_y(i,j) = I(i,j+1) - I(i,j)$.
(viii) Boddeke's Algorithm (BOD): This function computes a gradient magnitude using a one-dimensional convolution mask, $B_x = \begin{pmatrix} -1 & 0 & 1 \end{pmatrix}$, applied along a single direction [27]. In the evaluations of hyperspectral image stacks, this direction corresponds to the spatial information dimension:
$$BOD = \sum_{i,j} \Delta I(i,j)^2,$$
where $\Delta I(i,j) = I(i+1,j) - I(i-1,j)$.
ii. Statistically based functions.
These functions form another family of FMs, involving calculations of image variance, entropy, and correlation. Well-focused images typically exhibit higher variance, information content, and sharper autocorrelation peaks compared to blurred ones [22].
(i) The Normalized Variance of an Image (NVR) sums the variance of the image's gray levels with respect to the mean intensity, normalized by the mean:
$$NVR = \frac{1}{\mu W H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left[ I(i,j) - \mu \right]^2,$$
where $\mu = \frac{1}{WH} \sum_{i,j} I(i,j)$ is the mean intensity, and $W$ and $H$ are the image width and height in pixels.
(ii) The Autocorrelation Function (ACF), also known as Vollath's F4 function, is more robust to image noise and computes the image's autocorrelation [23]:
$$ACF = \sum_{i=1}^{W-1} \sum_{j=1}^{H} I(i,j) \cdot I(i+1,j) - \sum_{i=1}^{W-2} \sum_{j=1}^{H} I(i,j) \cdot I(i+2,j).$$
(iii) The Standard Deviation-based Autocorrelation Function (Vollath's F5 function, VOL5), which suppresses high frequencies, is also utilized [37]:
$$VOL5 = \sum_{i=1}^{W-1} \sum_{j=1}^{H} I(i,j) \cdot I(i+1,j) - H \cdot W \cdot \mu^2.$$
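The three statistical measures above translate directly into NumPy; the implementations below are our own reading of the formulas (lag products taken along the first array axis), not code from the paper:

```python
import numpy as np

def nvr(img):
    """Normalized Variance: gray-level variance scaled by the mean intensity."""
    mu = img.mean()
    return float(np.sum((img - mu) ** 2) / (mu * img.size))

def acf(img):
    """Vollath's F4: lag-1 minus lag-2 autocorrelation along one axis."""
    return float(np.sum(img[:-1, :] * img[1:, :])
                 - np.sum(img[:-2, :] * img[2:, :]))

def vol5(img):
    """Vollath's F5: lag-1 autocorrelation minus W*H*mu^2."""
    h, w = img.shape
    return float(np.sum(img[:-1, :] * img[1:, :]) - h * w * img.mean() ** 2)
```

Note that `nvr` assumes a nonzero mean intensity, which holds for reflectance images but should be guarded against for dark frames.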
Additionally, three image histogram-based functions were included in the evaluation list. These functions offer different insights into the distribution and characteristics of the image histogram, contributing to the assessment of image sharpness and focus.
(iv) Entropy Function (ENT): A focused image has higher entropy (i.e., more information) than a defocused image, and therefore the range of the image histogram can be used as an FM. The ENT FM uses the image histogram and is defined as
$$ENT = -\sum_{l} p_l \cdot \log_2 p_l,$$
where $p_l = h(l)/(HW)$ is the probability of each intensity level $l$ in the histogram, and $h(l)$ represents the number of pixels with intensity $l$.
(v) Variance of the Log-histogram (LOG): This FM is based on the assumption that high-intensity pixels contribute to the upper part of the histogram, and it addresses the image's brightness level through a logarithmic transformation of the histogram [23]:
$$LOG = \sum_{l} \left( l - E_{log} \right)^2 \cdot \log(p_l),$$
where $E_{log} = \sum_{l} l \cdot \log(p_l)$.
(vi) Weighted Histogram (WHS): This FM is based on a weighted image histogram without introducing a constant threshold, taking into account that a focused image has more bright pixels than a defocused one [23]. The values of the power and root are determined empirically. Here, $l$ and $h(l)$ represent the gray level and the number of pixels at that gray level, respectively:
$$WHS = \sum_{l} 10^{-15} \cdot l^{5} \cdot \sqrt[5]{h(l)}.$$
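As an example of the histogram family, the entropy measure ENT can be sketched as follows; the bin count and the assumed [0, 1] intensity range are illustrative choices, not specified by the paper:

```python
import numpy as np

def entropy_fm(img, levels=256):
    """ENT: Shannon entropy of the gray-level histogram. A focused image
    spreads its intensities over more levels, which raises the entropy."""
    h, _ = np.histogram(img, bins=levels, range=(0.0, 1.0))
    p = h / h.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```

Defocusing narrows the histogram toward the mean intensity, concentrating probability mass in fewer bins and lowering the score.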
iii. Frequency domain focus functions.
They transform the image from the spatial domain to the frequency domain, where more blurred images have fewer high-frequency components. In defocused images, the interaction between adjacent pixels occurs at low frequencies; as the image becomes more focused, the number of high-frequency components in the frequency domain increases, which serves as the basis for assessing image clarity.
Three widely used representatives of this group of FMs were evaluated in the current work [38].
(i) The first is the Fourier transform (FFT) measure, given by
$$FT = \sum_{u,v} \sqrt{u^2 + v^2} \cdot \left| G(u,v) \right|,$$
where $u$ and $v$ are the coordinates in the frequency domain, and $G$ is the Fourier transform of the image with the zero-frequency component shifted to the center of the Fourier spectrum; the image array is zero-padded before performing the FFT. The value of the FM is calculated from the real and imaginary parts of each transformed array.
The two Discrete Cosine Transform (DCT)-based FMs below require only real number calculations, which improves the speed of the algorithm.
(ii) The second transform-based FM, the Discrete Cosine Transform (DCT), is calculated using the formula
$$DCT = C_u \cdot C_v \cdot \sum_{m,n} I(m,n) \cdot \cos\left[ \frac{\pi (2m+1) u}{2K} \right] \cdot \cos\left[ \frac{\pi (2n+1) v}{2K} \right],$$
where $C_u = \sqrt{1/K}$ if $u = 0$, $C_v = \sqrt{1/K}$ if $v = 0$, and $C_u = C_v = \sqrt{2/K}$ otherwise.
(iii) The third function is the Midfrequency-DCT (MF-DCT) [29]. This FM is the 8 × 8 modification of the DCT and, like the DCT, is computed for every pixel according to its neighborhood [38]:
$$MFDCT = \sum_{m,n} \left[ (I * O_{MDCT})(m,n) \right]^2,$$
where
$$O_{MDCT} = \begin{pmatrix} 1 & 1 & -1 & -1 \\ 1 & 1 & -1 & -1 \\ -1 & -1 & 1 & 1 \\ -1 & -1 & 1 & 1 \end{pmatrix}.$$
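A compact sketch of the frequency-weighted FFT measure described above is given below; the distance-weight construction is ours, and zero-padding is omitted for brevity:

```python
import numpy as np

def fft_fm(img):
    """FFT-based FM: sum of Fourier magnitudes weighted by their distance
    from the (centered) zero-frequency component, so that high spatial
    frequencies dominate the score."""
    G = np.fft.fftshift(np.fft.fft2(img))
    H, W = img.shape
    u = np.arange(H) - H // 2          # centered frequency coordinates
    v = np.arange(W) - W // 2
    weight = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return float(np.sum(weight * np.abs(G)))
```

Because the weight is zero at DC and grows with frequency, attenuating high frequencies (defocus) necessarily lowers the score.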
Finally, the next three wavelet-based FMs were directly applied as described in [20]. Briefly, the image is decomposed into four sub-images ($ll$, $lh$, $hl$, $hh$) using a discrete wavelet transform with the Daubechies-06 filter, where $h$ and $l$ denote the high-pass and low-pass filtered directions, respectively:
(iv) (WL1) Wavelet Algorithm: This FM sums the absolute values of the coefficients in the ($hl$, $lh$, $hh$) sub-images:
$$WL1 = \sum_{i,j} \left| W_{hl}(i,j) \right| + \left| W_{lh}(i,j) \right| + \left| W_{hh}(i,j) \right|;$$
(v) (WL2) Wavelet Algorithm: This FM uses the variance of the wavelet coefficients, summed over the ($hl$, $lh$, $hh$) sub-images. Here, the mean values $\mu$ in each region are computed from the absolute values of the coefficients:
$$WL2 = \frac{1}{H \cdot W} \sum_{i,j} \left( \left| W_{hl}(i,j) \right| - \mu_{hl} \right)^2 + \left( \left| W_{lh}(i,j) \right| - \mu_{lh} \right)^2 + \left( \left| W_{hh}(i,j) \right| - \mu_{hh} \right)^2;$$
(vi) (WL3) Wavelet Algorithm: The difference between WL2 and WL3 is that the mean values $\mu$ are computed without taking absolute values:
$$WL3 = \frac{1}{H \cdot W} \sum_{i,j} \left( W_{hl}(i,j) - \mu_{hl} \right)^2 + \left( W_{lh}(i,j) - \mu_{lh} \right)^2 + \left( W_{hh}(i,j) - \mu_{hh} \right)^2.$$

2.3. PCA

PCA is a powerful mathematical tool that is commonly used to transform a multivariate dataset with intercorrelated variables into a reduced-dimensionality set of uncorrelated variables, known as principal components. These principal components are derived through linear combinations of the original variables. The total variance of the principal components is equivalent to the total variance of the original variables, and each component is associated with a rank indicating the percentage of the data variance it explains. Depending on the application, various strategies can be followed to select the number of components [39,40,41,42,43,44].
In this work, we applied PCA to both out-of-focus and focused images and identified a relationship between the ranking of the principal components and image sharpness. The concept of PCA can be found in various sources, such as [39,42]. Here, we briefly describe its implementation within the MATLAB environment [45], which was applied in the current work. Initially, for each image $I_s$ ($s$ denotes the step number within a single autofocus search cycle), a standardized image $I_m$ is calculated as $I_m = (I_s - \mu)/\sigma$, where the $\mu$ and $\sigma$ images represent the mean and standard deviation, respectively. Then, the covariance matrix is computed as $C = (1/WH)\, I_m^T I_m$, where $I_m^T$ is the transpose of $I_m$, and $W$ and $H$ are the image dimensions. Following this, the eigenvalues $\Lambda$ and corresponding eigenvectors $V$ of the matrix $C$ are calculated from the equation $CV - \Lambda V = 0$ to extract the principal components. Once the images are projected into the PCA domain, the principal components are sorted in decreasing order, with the first principal component corresponding to the largest variance. Since out-of-focus images are typically blurred and exhibit lower variance (i.e., carry less signal) than focused images, images captured closer to the focal plane are expected to have a higher signal-to-noise ratio, accumulating more variance in fewer principal components. We exploit this behavior of PCA and calculate a focus curve, where the y axis corresponds to the number of principal components required to preserve a fixed amount (e.g., 98–99%) of the data variance and the x axis, as usual, corresponds to the focal position. We have observed that the global minimum of such a curve corresponds well to the best focus position.
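A simplified sketch of this criterion is shown below. It is not the paper's MATLAB code: scalar standardization is used here in place of the per-pixel mean and standard-deviation images, and the 99% variance threshold is an example value from the stated 98–99% range.

```python
import numpy as np

def pca_components_needed(img, keep=0.99):
    """Standardize the image, form the covariance matrix of its columns, and
    count how many principal components are needed to retain `keep` of the
    total variance. Near focus, variance concentrates in fewer components,
    so this count has its global minimum at the best focus position."""
    Im = (img - img.mean()) / img.std()
    C = (Im.T @ Im) / Im.size                  # W x W covariance matrix
    eigvals = np.linalg.eigvalsh(C)[::-1]      # eigenvalues, descending
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.argmax(frac >= keep) + 1)    # first index reaching `keep`
```

Applying this function to every image of a through-focus stack and taking the position of the minimum yields the PCA-based "ideal" focus reference.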

2.4. Ranking Criteria

The criteria for assessing the performance of an FM may vary depending on the imaging system and specific application, and various criteria are available for ranking FMs [20,28,32,46]. However, two criteria, namely accuracy and unimodality, are the most important and common across all types of imaging systems.
Accuracy ($d_{acc}$): This measure is used to assess how the estimated best focus position, obtained through FMs, deviates from the optical ("ideal") focus position. Typically, in the literature, the actual focus position is determined by laboratory staff or proficient microscope technicians who visually search for the best focus position, characterized by the image with the highest contrast, sharpest edges, and highest intensity values.
However, in hyperspectral imaging, accurately determining the focus position visually is more challenging compared to conventional imaging systems. Therefore, in this study, the “ideal” focus position was determined using PCA. The focus accuracy is evaluated by the in-focus position error, defined as the absolute difference between the peak positions determined by the PCA analysis and those provided by the given FM. The optimal value is 0.
Unimodality ($d_{uni}$): An ideal focus curve should display a single prominent peak corresponding to the optimal focal position. However, in practical scenarios, focus curves may contain several local maxima or minima, which can hinder the accurate determination of the in-focus position. The presence of false extrema is greatly influenced by image noise. The optimal value is 0.
Furthermore, to assess the performance of focus measures in the context of hyperspectral imaging, we have employed three additional criteria.
Width at 50% of Maximum ($d_{50}$): This measure, also referred to as the Full Width at Half Maximum (FWHM), is utilized to characterize the convergence of the focus curve and the sharpness of its peak. It is defined as the curve width at half of its maximum height.
Width at 90% of Maximum ($d_{90}$): In the dynamic line scanning mode of hyperspectral imagers, an autofocus measure should demonstrate sensitivity to changes in the distance between the imager and the sample surface. This sensitivity is vital for UAV-based hyperspectral imaging, where the distance fluctuates due to the topography of the land surface and environmental factors such as wind. Therefore, we introduce the width at 90% of the maximum to assess the suitability of an FM for the scanning mode. A smaller value of this measure enables easier detection of defocusing and speeds up the attainment of the global extremum.
Smoothness ($d_{smo}$): Searching for the focus position along a smooth curve is relatively easier than along a jagged one. While numerous smoothness indices exist [47], we opt for the sum of the absolute values of the first derivative of the curve. A lower smoothness index indicates a smoother curve. Throughout the assessments, we employ a normalized smoothness index.
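The criteria above can be operationalized on a sampled focus curve as follows. These are our own simplified formulations for illustration (e.g., counting samples above half-maximum as an FWHM proxy, and normalizing the smoothness index by the curve's range); the paper's exact computations may differ, and a non-flat curve is assumed:

```python
import numpy as np

def focus_curve_metrics(curve, ref_idx):
    """Ranking criteria for a sampled focus curve:
    d_acc - |argmax - reference index|             (accuracy; optimal 0)
    d_uni - local maxima besides the global peak   (unimodality; optimal 0)
    d_50  - number of samples at >= 50% of max     (FWHM proxy, in steps)
    d_smo - normalized sum of |first differences|  (lower = smoother)
    """
    c = np.asarray(curve, dtype=float)
    d_acc = abs(int(np.argmax(c)) - ref_idx)
    # a local maximum is strictly higher than both of its neighbors
    maxima = [i for i in range(1, len(c) - 1)
              if c[i] > c[i - 1] and c[i] > c[i + 1]]
    d_uni = max(0, len(maxima) - 1)
    d_50 = int(np.sum(c >= 0.5 * c.max()))
    d_smo = float(np.sum(np.abs(np.diff(c)))) / (c.max() - c.min())
    return d_acc, d_uni, d_50, d_smo
```

For $d_{90}$ the same counting idea applies with a 0.9 threshold in place of 0.5.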

2.5. HSIMs

In the course of the current work, four distinct experimental setups (Table 1) employing hyperspectral imaging were designed and implemented. All of these setups incorporate a transmission-type pushbroom (line scanning) spectrometer from Specim (Finland), featuring a dispersive element arranged in a prism–grating–prism optical configuration (Figure 1b, inset). Two variants of the spectrometer were utilized: one operating in the visible-near-infrared (Vis-NIR) spectral region from 400 nm to 1000 nm and the other in the NIR spectral region from 900 nm to 1700 nm.
Two HSISs operating in the Vis-NIR and NIR spectral bands utilize high-magnification microscope objectives from Hirox, Japan (OL-700II and OL-140I, respectively), along with a variable (1×–10×) zoom lens. These configurations are referred to as HSIM1 and HSIM2 throughout this paper and are primarily used in studies of soil chemical components. A custom-built two-axis computer-controlled translation platform was employed to scan the sample at a precise and adjustable scanning speed. Additionally, a step motor-controlled translation stage with a minimum step size of 0.6 μm was utilized to adjust the microscope–sample distance, thereby enabling focal position adjustments (Figure 1b). In the initial setup, illumination was provided by a current-stabilized tungsten–halogen lamp (up to 250 W), which was focused onto the coaxial entrance of the microscope, ensuring an "ideal" illumination and diffuse detection configuration as described in [8]. To mitigate second-order spectra, a half-colored long-pass cut-off filter was positioned at the output of the spectrometer.
The remaining two setups, referred to as HSIS1 and HSIS2 in this paper, incorporate a single objective lens positioned in front of the spectrometer. Specifically, a 55 mm focal length partially telecentric imaging lens (1.0×–0.4×, FL 55 mm, from Edmund Optics) is utilized (Figure 2a). These setups employ a 45-degree incident and 90-degree detection (d90R45) configuration for illumination [8]. Two quartz–halogen lamps (150 W each) are arranged on the left and right sides of the objective lens, providing uniform irradiation over a sample surface area of approximately 50 mm2, which significantly exceeds the FOV of the corresponding HSIS. These HSIS setups are primarily employed in studies on soil moisture analyses.

2.6. Instrument Calibration

To spectrally calibrate the HSISs and determine their spectral resolution, several types of calibration lamps (particularly neon, mercury, and xenon lamps from Pen-Ray, Spectral Calibration Lamps) with reference spectral lines listed in the NIST Atomic Spectra Database were employed. Additionally, a series of lasers with known wavelengths at 0.47, 0.53, 1.15, 1.35, and 1.55 μm was utilized. The beams from these lasers were diffusely scattered inside an integrating sphere and subsequently used for calibration purposes. The spectral resolution of each system is listed in Table 1.
The alignment of the sensor arrays and corresponding spectrometers was conducted using the standard USAF 1951 test chart, following the procedures described in [48]. Alignment validation was performed using a calibrated gauge—a standard glass ruler from Mitutoyo (Japan) with 100 μm separations between adjacent grid marks (this gauge was also used to determine the spatial resolution of the setups)—and white paper with thin parallel lines printed 1–3 mm apart. To minimize the contribution of optical distortion from the objective lenses, a telecentric lens (0.5× Gold Series, F/25, Edmund Optics) was used in the alignment validation tests for the visible- and NIR-range HSIMs.

2.7. Samples and Sample Preparation

To ensure robust comparisons and to draw general conclusions for selecting a particular FM, it is essential to choose a statistically sufficient number of samples that convey a variety of information and exhibit reasonable variation in attribute features. To meet this requirement, different samples were chosen for the Vis-NIR- and NIR-range experiments. Specifically, in the visible range, nine different samples of five types were used (Figure 1c): (i) diffuse surfaces (pieces of paper, both white and colored), (ii) mixed specular and diffuse reflectance surfaces (colored plastic surfaces), (iii) a specularly weighted reflectance surface (an aluminum plate), (iv) a heterogeneous sample (a mix of ground coffee and pepper powders), and (v) wood. In the NIR experiments, the following samples were used: (i) wood, (ii) external and internal skin surfaces of a banana, (iii) slices of apple (freshly cut, with surface moisture wiped off with paper towels), (iv) a slice of cheese, (v) textile doused in biodiesel, (vi) coffee powder, and (vii) fresh bay leaves. The leaf, cheese, and fruit slice samples were placed on a stainless steel background and covered with a quartz glass plate of 0.8 mm thickness to prevent the evaporation of moisture under lamp illumination.
In addition to the aforementioned items, the FMs were examined using bare soil samples. In total, 11 different types of soil, with varying grain sizes and clay and sand contents, were selected for the initial experiments. Portions of the samples were sieved to particle sizes of 0.5 mm and 2 mm and placed into Petri dishes in a layer about 1 cm thick; these dried and sieved samples were used in the current experiments. Additionally, special attention was paid to preparing an isoplanatic surface on all samples. All samples were prepared at room temperature, 23 ± 0.5 °C.

2.8. Image Acquisition

Hyperspectral images were acquired in the diffuse reflectance mode. A series of hyperspectral image stacks was captured for each experimental sample. Motorized movements of the systems in the Z direction were controlled using homemade software that synchronized the image acquisition process with the scanning of the distance between the imager and the object plane. The images within each stack correspond to the same FOV and optical magnification, captured at finely spaced intervals along the optical axis of the corresponding hyperspectral imaging system. For each zoom value of the objective lens used, the corresponding depth of field (DOF, $\Delta Z$) was calculated according to the expression $\Delta Z = \lambda / \left[ 4 n \left( 1 - \sqrt{1 - (NA/n)^2} \right) \right]$, where $\lambda$ is the effective wavelength of illumination, $n$ is the refractive index of air, and $NA$ is the numerical aperture of the objective lens at the given optical magnification. The corresponding axial scanning step sizes were then calculated, taking into account the Nyquist criterion. In all experiments, image stacks containing from 150 to 250 images (depending on the optical zoom value) for the HSIMs, and 80 to 100 images for the HSISs, were captured and saved in a raw format for further processing. Each image stack is also coupled with a corresponding dark image captured before the series acquisition.
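The stated DOF expression and the step-size calculation can be sketched as follows; the function names and the factor-of-two "two samples per DOF" reading of the Nyquist criterion are our illustrative assumptions:

```python
import math

def depth_of_field(wavelength_um, na, n=1.0):
    """Diffraction-limited depth of field (in the same units as wavelength):
    dZ = lambda / (4 * n * (1 - sqrt(1 - (NA / n)**2)))"""
    return wavelength_um / (4.0 * n * (1.0 - math.sqrt(1.0 - (na / n) ** 2)))

def nyquist_axial_step(wavelength_um, na, n=1.0):
    """Axial scanning step size: at least two samples per depth of field."""
    return depth_of_field(wavelength_um, na, n) / 2.0
```

For example, at an effective wavelength of 0.55 μm and NA = 0.28 in air, the DOF evaluates to about 3.44 μm, giving a step of roughly 1.7 μm.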

3. Results

3.1. Validation Phase: Video Images

Before examination in the hyperspectral imaging domain, the performance of the FM functions was assessed using conventional video image stacks. To preserve the optical configurations as much as possible, the spectrographs were removed from the Vis-NIR and NIR hyperspectral systems, transforming them into conventional video imagers. Various samples were used to record through-focus video image stacks at different optical magnifications. PCA was applied to determine the best focus position in these stacks. Figure 3a presents the result of this evaluation, illustrating the global minimum of the PCA curve, which was used as the reference for evaluating the performance of the FMs. Subsequently, the FMs were applied to determine the focus position in the same stacks. Figure 3b–d show examples of these evaluations, illustrating typical focus curves of the FMs, normalized by their maximum values and categorized by families. Likewise, Figure 4a–d depict these results for the NIR range, obtained with the same soil sample. The results indicate that the functions exhibit reliable focus curves with a distinct extremum and monotonic behavior, where different FM values correspond to different levels of defocusing. This behavior aligns well with state-of-the-art results [19,20,21,22].
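The exact PCA focus criterion is not spelled out in this excerpt. One plausible reading, offered purely as an illustrative sketch, treats each frame of the through-focus stack as an observation, projects it onto the leading principal components of the stack, and tracks the per-frame residual; the function name, the SVD route, and the choice of residual as the minimized quantity are all assumptions, not the paper's verified formulation.

```python
import numpy as np

def pca_focus_curve(stack: np.ndarray, n_components: int = 3) -> np.ndarray:
    """stack: (n_frames, H, W). Returns per-frame residual energy after
    projecting each mean-centred, flattened frame onto the stack's leading
    principal components. In this illustrative reading, the frame with the
    minimum residual serves as the focus reference."""
    n = stack.shape[0]
    X = stack.reshape(n, -1).astype(np.float64)
    X -= X.mean(axis=0, keepdims=True)            # centre across frames
    # SVD of the (frames x pixels) matrix; rows are observations
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    proj = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    return np.linalg.norm(X - proj, axis=1)       # per-frame residual

# best_focus_index = int(np.argmin(pca_focus_curve(stack)))
```

A global minimum of such a curve plays the role of the reference focus position described for Figure 3a.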
Moreover, the best focus position in these video stacks was determined by proficient microscope operators and compared with that determined by the PCA evaluations. An exact coincidence of the two focus positions was observed for all video stacks. Based on these results, the focus position determined by PCA was used as the reference for the accuracy estimation of the FMs in the hyperspectral imaging domain.

3.2. The Behavior of the FMs in Hyperspectral Images

In the evaluations, a dataset comprising 92 hyperspectral image stacks was utilized for performance assessments. This dataset encompasses a total of 72 hyperspectral image stacks of soil samples, with 36 image stacks equally distributed across both the Vis-NIR and NIR bands. Additionally, the dataset incorporates 14 image stacks within the Vis-NIR range and another 16 within the NIR range, corresponding to the objects specified in Section 2.7.

3.2.1. The Vis-NIR Range

Figure 2b shows an example of Vis-NIR hyperspectral images of the same soil sample recorded at the well-focused and defocused (corresponding to the 50% (top) and 90% (middle) positions of the focus curve) focal positions. Figure 5a illustrates an example of an image profile along the spatial axis for a well-focused image and for defocused images with focus measures at 50% and 90% of the maximum, respectively. Figure 6a–d display a series of focus curves for each family of FMs evaluated for the same stack using raw Vis-NIR hyperspectral images (HSIS1 at 0.5× optical magnification). Practically identical focus curve behavior was observed for all hyperspectral image stacks in the dataset recorded using the soil samples and HSIS1.
However, in experiments with the HSIM1, an increase in optical magnification degraded the focus curves in terms of width and noise level, as shown in Figure 7a–d (HSIM1 at 100× optical magnification). Nevertheless, the position of the best focus remained mostly unchanged, most prominently for the gradient-based and frequency-based FMs. Except for SAG and EOL, which exhibit side lobes, all gradient-based FMs demonstrated behavior close to the ideal curve. In some cases, a second, false peak appeared for the frequency-based FMs as the optical magnification increased; however, the width of the focus curve, especially for the wavelet-based FMs, was only slightly affected by the magnification. Except for ACF, all FMs in the statistics-based family degraded with increasing optical magnification of the imaging system. It is worth noting that in the experiments with the HSIS1, the behavior of the focus curves was more consistent under the variation in optical magnification from 0.5× to 1×.
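For reference, a gradient-based FM of the kind that performed best here can be sketched as follows, assuming (as is common in this literature) that TEN1 denotes the classic Tenengrad operator, i.e., the sum of squared Sobel gradient magnitudes; this is an illustrative implementation, not the paper's verified code.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)

def _filter_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """3x3 'valid'-mode correlation written with plain slicing (no SciPy)."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:H - 2 + i, j:W - 2 + j]
    return out

def tenengrad(img: np.ndarray) -> float:
    """Sum of squared Sobel gradient magnitudes; larger values = sharper image."""
    img = img.astype(np.float64)
    gx = _filter_valid(img, SOBEL_X)
    gy = _filter_valid(img, SOBEL_X.T)
    return float(np.sum(gx ** 2 + gy ** 2))
```

Evaluating such an operator frame by frame along the Z-stack yields the focus curves plotted in Figures 6 and 7, with the maximum marking the estimated focus.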

3.2.2. The NIR Range

Finally, we assessed the performance of the FMs on the NIR-range hyperspectral image stacks. Figure 2c shows an example of NIR hyperspectral images (obtained using the HSIS2 and a soil sample) recorded at the well-focused (bottom of Figure 2c) and defocused (corresponding to the 50% (top) and 90% (middle) positions of the focus curve) focal positions. Figure 5b illustrates an example of an image profile along the spectral axis for a well-focused image and for defocused images with focus measures at 50% and 90% of the maximum, respectively. A series of focus curves for each family of FMs, evaluated for the same stack using raw NIR hyperspectral images, is shown in Figure 8a–d. Analogous to the validation and the Vis-NIR cases, for brevity we present only the results of one hyperspectral image stack, corresponding to the same soil sample. In contrast to the results observed for the video-mode datasets and the Vis-NIR hyperspectral images, where practically all FMs exhibited focus curves with a global extremum, several FMs (the MD-DCT and most statistics-based FMs, except for ACF) displayed no extremum whatsoever in the NIR-range hyperspectral image series. The effectiveness of PCA also fluctuated in these stacks, and the fluctuations became more prominent at high magnifications (around 50×) of the HSIM2. While the shape of the PCA curve improves as more components are involved, the generally noisy behavior of the focus curves of most FMs and of PCA increases the overall evaluation uncertainty. To address this issue, we first observed that averaging up to 10 images at each focal position did not notably enhance the quality of the FMs. We therefore implemented a sub-sampling option (i.e., the selection of a region of interest, ROI) for each individual image within the stack before evaluating the FMs.
This approach significantly enhanced the performance of the FM and PCA focus curves. Consequently, in evaluations involving noisy NIR images, the assessments were conducted by applying an ROI to the images, as described in the next subsection.

3.3. Robustness of the FMs

In this study, the speed of the autofocusing process itself is not examined, for reasons discussed later. However, an extensive investigation was conducted to statistically understand the impact of sub-sampling (i.e., ROI selection) of the images in a stack, which accelerates the autofocusing process. While the acceleration of FM evaluations through ROI selection is evident, since the number of processed pixels decreases in proportion to the ratio of the ROI area to the full image area, the extent to which the performance of the FMs is affected by sub-sampling remained unclear. Various ROI window sizes and locations were therefore applied to the image stacks prior to the assessment of FM performance.
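The ROI sub-sampling step itself is straightforward; the sketch below shows a centred variant (the function name and the centred placement are illustrative, and the study also varied the ROI location).

```python
import numpy as np

def centre_roi(img: np.ndarray, frac: float = 0.5) -> np.ndarray:
    """Return a centred ROI covering `frac` of each image dimension,
    so FM evaluation touches only about frac**2 of the pixels."""
    H, W = img.shape[:2]
    h, w = max(1, int(H * frac)), max(1, int(W * frac))
    top, left = (H - h) // 2, (W - w) // 2
    return img[top:top + h, left:left + w]
```

With `frac = 0.5`, an FM is evaluated on a quarter of the original pixel count, which is where the speed-up discussed above comes from.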
Finally, we assessed the robustness of the FMs against artificially generated noise and nonhomogeneous illumination. For the noise tests, we added Poisson noise (using the MATLAB function “poissrnd” with parameters of 10, 25, 50, and 100) to each original image in an image stack obtained in the visible range and repeated the focus evaluations. For the illumination tests, a luminance gradient, represented as a quadratic polynomial function with maximum intensity values of 0.8, 0.9, and 1 and a minimum value of 0, was applied as a gray-level image multiplied with the original images [23]. Figure 9a–d demonstrate an example of the focus curve behavior in noisy image stacks. Through the analysis of more than 20 hyperspectral image stacks, ABG, WL3, and ACF were found to be the most stable in focusing on noisy images. The results of these simulations were utilized in the subsequent ranking evaluations of the FMs, as described in the next section.
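The two degradations can be emulated as below. This is a hedged sketch: the mapping of the 10/25/50/100 parameter onto a count scale is our assumption about how “poissrnd” was applied, and the quadratic ramp is one plausible form of the polynomial luminance gradient described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_noise(img: np.ndarray, scale: float) -> np.ndarray:
    """Draw Poisson counts with mean img * scale, then rescale back.
    `scale` plays the role of the 10/25/50/100 parameter in the text
    (the exact MATLAB usage is assumed, not verified)."""
    return rng.poisson(np.clip(img, 0, None) * scale) / scale

def illumination_gradient(shape, peak: float = 0.9) -> np.ndarray:
    """Quadratic luminance ramp rising from 0 to `peak` across the width,
    to be multiplied with the original image as a gray-level mask."""
    H, W = shape
    x = np.linspace(0.0, 1.0, W)
    ramp = peak * x ** 2          # quadratic polynomial: min 0, max `peak`
    return np.tile(ramp, (H, 1))

# degraded = add_poisson_noise(img, 25) * illumination_gradient(img.shape, 0.9)
```

Smaller `scale` values give proportionally noisier images, matching the intent of sweeping the parameter from 10 to 100.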

4. Discussion

4.1. Results of Ranking Evaluations

The calculation of performance indices is carried out using normalized focus curves. For each normalized focus curve, the distance parameter, denoted as D, is determined by calculating the difference between the observed (d_acc, d_uni, d_50, d_90, d_smo) values and the ideal values (0, 0, 0, 0, 0) for the corresponding criteria. A smaller distance value suggests better performance of that FM when ranked based on that parameter. To establish an overall ranking, the Euclidean length of each focus curve's distance vector is calculated using the following formula:
D = √(d_acc² + d_uni² + d_50² + d_90² + d_smo²)
where each distance is normalized to its maximum value. This ensures that in the overall scoring, each distance carries equal weight. Consequently, the best FM will have the lowest overall score.
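As a check on the formula, the helper below reproduces the TEN1 row of Table 2, where distances of 0.10, 0.00, 0.59, 0.37, and 0.59 yield an overall score of about 0.92; the dictionary keys are our own labels, not notation from the paper.

```python
import numpy as np

def overall_score(distances: dict) -> float:
    """Euclidean length of the five criterion distances
    (d_acc, d_uni, d_50, d_90, d_smo), each pre-normalized to [0, 1]."""
    d = np.array([distances[k] for k in ("acc", "uni", "w50", "w90", "smo")])
    return float(np.sqrt(np.sum(d ** 2)))

# TEN1 row of Table 2:
score = overall_score({"acc": 0.10, "uni": 0.00, "w50": 0.59, "w90": 0.37, "smo": 0.59})
# score is approximately 0.92, matching the tabulated overall score
```

Because every distance is normalized to its maximum, each of the five criteria contributes with equal weight, and the best FM ends up with the lowest score.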

4.1.1. Ranking Evaluations for the Conventional Images

First, we conducted an evaluation of conventional monochrome video image stacks. Table 2 summarizes the ranking of the FMs based on the individual criterion distances and overall scores, calculated as the average over 40 image stacks of the samples described in Section 3.3 and the soil samples. TEN1 and BOD were identified as providing nearly identical top-ranking performance among the gradient-based FMs. While the BRE and ABG FMs exhibited the highest accuracy and behavior similar to TEN1, BOD, and TEN2 in terms of curve width, a false maximum was observed for these FMs, particularly at the highest magnification. However, at low magnifications ranging from 0.5× to 1×, no false maximum was observed, and they demonstrated overall performance comparable to the leading FMs. WL3 exhibited the best overall performance among all evaluated FMs. Unlike the other frequency-based FMs, no false maximum was observed for WL3 at higher magnifications. Among the statistics-based FMs, NVR was identified as the superior algorithm. Similarly, Table 3 presents the ranking of the FMs based on conventional video images (a dataset containing 35 image stacks) captured with the NIR imaging systems. Within their respective FM families, TEN1, WL3, and VOL5 emerged as the top-performing algorithms, with TEN1 demonstrating the best performance among all FMs.

4.1.2. Ranking Evaluations for the Hyperspectral Images

Table 4 summarizes the individual performance and ranks within each respective family of all evaluated FMs in the context of the vis-NIR hyperspectral domain. As seen from Table 4, in this domain, the first six gradient-based FMs demonstrate very close performance to each other, with the BRE function resulting in stable performance across all investigated samples, including noisy images. This is followed by the best performance of WL3 and ACF algorithms in the frequency and statistical groups, respectively. Generally, the three wavelet-based FMs demonstrate good performance in terms of accuracy and curve width, except for unimodality. Particularly, with the increase in optical magnification, a side lobe appears around the main peak of WL1 and WL2 focus curves.
While statistics-based algorithms are recognized for their strong performance with conventional images [20,22,28,32], our experiments have shown that they underperform with hyperspectral images. For the vis-NIR domain images, ACF was found to provide the best overall performance, comparable to the leaders of the other two FM families. Although the accuracy of the ENT and VOL5 functions was similar to that of ACF, they underperformed on the other criteria.
Finally, Table 5 summarizes the individual performance and ranks within each respective family of all evaluated FMs in the context of the NIR hyperspectral domain. A comparison with Table 4 shows that the ranking of the FMs changed in the gradient-based and statistics-based groups, with no change among the frequency-domain FMs. This indicates that the latter algorithms, especially the wavelet-based ones, are the most robust to sub-sampling and noisy images. BOD and ABG demonstrated high performance in locating the best focus position very near that determined by PCA; however, the width of the ABG focus curve deteriorates with image noise. BOD was therefore found to demonstrate the best performance on the NIR spectral band hyperspectral images, followed by WL3 in the frequency domain and NVR in the statistics-based FM family. In the latter group, ENT demonstrates overall performance very similar to that of NVR.

5. Conclusions

Autofocusing is one of the fundamental techniques influencing the efficiency of automated digital imaging systems. Image-based autofocusing, which uses image sharpness information (i.e., an FM) to determine the best focus position, predominates in the field. Since FMs are domain-specific, their performance is evaluated using verified focus positions. In this study, we employed PCA to define the optimal focus position in hyperspectral image stacks, where individual images lack clear visual information. We then conducted a systematic analysis of the performance of twenty-two widely utilized FMs for autofocusing in hyperspectral imaging systems. We acquired a diverse set of hyperspectral image stacks, encompassing high (25×–100×) and low (0.1×–0.5×) optical magnifications, using four different pushbroom imaging spectrometer-based systems operating within the Vis-NIR (400–1000 nm) and NIR (900–1700 nm) spectral ranges. The performance of the various focus measures was assessed through experiments conducted under diverse conditions, including variations in optical magnification, illumination level and uniformity, spectral band, image noise level, and ROI window size.
Furthermore, a ranking methodology tailored to hyperspectral imaging systems, as utilized in remote sensing, was proposed. Based on these criteria, the overall performance of the focus measures was evaluated both by family and individually. The experimental results demonstrate that gradient-based focus measures are the fastest and most reliable operators in this field, while wavelet-based algorithms are the most robust to sub-sampling and noise in hyperspectral images.

Funding

This research was funded by the 21GRD08 SoMMet project. The SoMMet project has received funding from the European Partnership on Metrology, co-financed by the European Union’s Horizon Europe Research and Innovation Programme and by the Participating States (funder name: European Partnership on Metrology; funder ID: 10.13039/100019599; grant no. 21GRD08 SoMMet; https://www.sommet-project.eu/, accessed on 25 September 2020).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The author thanks Alisher Kholmatov for his assistance in discussing the technical aspects of the paper’s content and results. The author also extends thanks to Omer Faruk Kadi for his help with the software. The author is grateful to anonymous referees for useful comments and constructive suggestions.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Jong, L.; Kruif, N.; Geldof, F.; Veluponnar, D.; Sanders, J.; Peeters, M.-V.; Duijnhoven, F.; Sterenborg, H.; Dashtbozorg, B.; Ruers, T. Discriminating healthy from tumor tissue in breast lumpectomy specimens using deep learning-based hyperspectral imaging. Biomed. Opt. Express 2022, 13, 2581–2604. [Google Scholar] [CrossRef] [PubMed]
  2. Cinar, U.; Cetin Atalay, R.; Cetin, Y.Y. Human Hepatocellular Carcinoma Classification from H&E Stained Histopathology Images with 3D Convolutional Neural Networks and Focal Loss Function. J. Imaging 2023, 9, 25. [Google Scholar] [CrossRef] [PubMed]
  3. Riu, L.; Poulet, F.; Carter, J.; Bibring, J.-P.; Gondet, B.; Vincendon, M. The M3 project: 1-A global hyperspectral image-cube of the martian surface. Icarus 2019, 319, 281–292. [Google Scholar] [CrossRef]
  4. Kucheryavskiy, S. A new approach for discrimination of objects on hyperspectral images. Chemom. Intell. Lab. Syst. 2013, 120, 126–135. [Google Scholar] [CrossRef]
  5. Shi, P.; Jiang, Q.; Li, Z. Hyperspectral Characteristic Band Selection and Estimation Content of Soil Petroleum Hydrocarbon Based on GARF-PLSR. J. Imaging 2023, 9, 87. [Google Scholar] [CrossRef]
  6. Hao, J.; Zhang, Y.; Zhang, Y.; Wu, L. Prediction of antioxidant enzyme activity in tomato leaves based on microhyperspectral imaging technique. Opt. Laser Technol. 2024, 179, 111292. [Google Scholar] [CrossRef]
  7. Zander, P.D.; Wienhues, G.; Grosjean, M. Scanning Hyperspectral Imaging for In Situ Biogeochemical Analysis of Lake Sediment Cores: Review of Recent Developments. J. Imaging 2022, 8, 58. [Google Scholar] [CrossRef]
  8. Boldrini, B.; Kessler, W.; Rebner, K.; Kessler, R.-W. Hyperspectral imaging: A review of best practice, performance and pitfalls for in-line and on-line applications. J. Near Infrared Spectrosc. 2012, 20, 483–508. [Google Scholar] [CrossRef]
  9. Vohland, M.; Ludwig, M.; Thiele-Bruhn, S.; Ludwig, B. Quantification of Soil Properties with Hyperspectral Data: Selecting Spectral Variables with Different Methods to Improve Accuracies and Analyze Prediction Mechanisms. Remote Sens. 2017, 9, 1103. [Google Scholar] [CrossRef]
  10. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep Learning Meets Hyperspectral Image Analysis: A Multidisciplinary Review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef]
  11. Gruber, F.; Wollmann, P.; Grählert, W.; Kaskel, S. Hyperspectral Imaging Using Laser Excitation for Fast Raman and Fluorescence Hyperspectral Imaging for Sorting and Quality Control Applications. J. Imaging 2018, 4, 110. [Google Scholar] [CrossRef]
  12. Hossain, M.; Younis, M.; Robinson, A.; Wang, L.; Preza, C. Greedy Ensemble Hyperspectral Anomaly Detection. J. Imaging 2024, 10, 131. [Google Scholar] [CrossRef] [PubMed]
  13. Zhu, M.; Huang, D.; Hu, X.-J.; Tong, W.-H.; Han, B.-L.; Tian, J.-P.; Luo, H.-B. Application of hyperspectral technology in detection of agricultural products and food: A Review. Food Sci. Nutr. 2020, 8, 5206–5214. [Google Scholar] [CrossRef] [PubMed]
  14. Temiz, H.-T.; Ulaş, B. A Review of Recent Studies Employing Hyperspectral Imaging for the Determination of Food Adulteration. Photochem 2021, 1, 125–146. [Google Scholar] [CrossRef]
  15. Lin, C.; Hu, Y.; Liu, Z.; Peng, Y.; Wang, L.; Peng, D. Estimation of Cultivated Land Quality Based on Soil Hyperspectral Data. Agriculture 2022, 12, 93. [Google Scholar] [CrossRef]
  16. Gosavi, D.; Cheatham, B.; Sztuba-Solinska, J. Label-Free Detection of Human Coronaviruses in Infected Cells Using Enhanced Darkfield Hyperspectral Microscopy (EDHM). J. Imaging 2022, 8, 24. [Google Scholar] [CrossRef]
  17. Zhang, J. A Hybrid Clustering Method with a Filter Feature Selection for Hyperspectral Image Classification. J. Imaging 2022, 8, 180. [Google Scholar] [CrossRef]
  18. Yazdanfar, S.; Kenny, K.-B.; Tasimi, K.; Corwin, A.-D.; Dixon, E.-L.; Filkins, R.-J. Simple and robust image-based autofocusing for digital microscopy. Opt. Express 2008, 16, 8670–8677. [Google Scholar] [CrossRef]
  19. Krotkov, E. Focusing. Int. J. Comput. Vis. 1988, 1, 223–237. [Google Scholar] [CrossRef]
  20. Sun, Y.; Duthaler, S.; Nelson, B.-J. Autofocusing in computer microscopy: Selecting the optimal focus algorithm. Microsc. Res. Tech. 2004, 65, 139–149. [Google Scholar] [CrossRef]
  21. Knapper, J.; Collins, J.-T.; Stirling, J.; McDermott, S.; Wadsworth, W.; Bowman, R.-W. Fast, high-precision autofocus on a motorised microscope: Automating blood sample imaging on the OpenFlexure Microscope. J. Microsc. 2022, 285, 29–39. [Google Scholar] [CrossRef] [PubMed]
  22. Liu, X.-A.; Wang, W.-A.; Sun, Y. Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear. J. Microsc. 2007, 227, 15–23. [Google Scholar] [CrossRef]
  23. Mateos-Pérez, J.-M.; Redondo, R.; Nava, R.; Valdiviezo, J.-C.; Cristóbal, G.; Escalante-Ramírez, B.; Ruiz-Serrano, M.-J.; Pascau, J.; Desco, M. Comparative evaluation of autofocus algorithms for a real-time system for automatic detection of Mycobacterium tuberculosis. Cytometry 2012, 81A, 213–221. [Google Scholar] [CrossRef]
  24. Zhang, C.; Jia, D.; Wu, N. Autofocus method based on multi regions of interest window for cervical smear images. Multimed. Tools Appl. 2022, 81, 18783–18805. [Google Scholar] [CrossRef]
  25. Panicker, R.-O.; Soman, B.; Saini, G.; Rajan, J. A Review of Automatic Methods Based on Image Processing Techniques for Tuberculosis Detection from Microscopic Sputum Smear Images. J. Med. Syst. 2016, 40, 17. [Google Scholar] [CrossRef]
  26. Li, J. Autofocus searching algorithm considering human visual system limitations. Opt. Eng. 2005, 44, 113201. [Google Scholar] [CrossRef]
  27. Sanz, M.-B.; Sánchez, F.-M.; Borromeo, S. An algorithm selection methodology for automated focusing in optical microscopy. Microsc. Res. Tech. 2022, 85, 1742–1756. [Google Scholar] [CrossRef]
  28. Liang, Q.; Qu, Y.-F. A texture–analysis–based design method for self-adaptive focus criterion function. J. Microsc. 2012, 246, 190–201. [Google Scholar] [CrossRef]
  29. Fonseca, E.; Fiadeiro, P.; Pereira, M.; Pinheiro, A. Comparative analysis of autofocus functions in digital in-line phase-shifting holography. Appl. Opt. 2016, 55, 7663–7674. [Google Scholar] [CrossRef]
  30. Faundez-Zanuy, M.; Mekyska, J.; Espinosa-Duró, V. On the focusing of thermal images. Pattern Recognit. Lett. 2019, 32, 1548–1557. [Google Scholar] [CrossRef]
  31. Chun, M.-G.; Kong, S.-G. Focusing in thermal imagery using morphological gradient operator. Pattern Recognit. Lett. 2014, 38, 20–25. [Google Scholar] [CrossRef]
  32. Rudnaya, M.-E.; Mattheij, R.-M.-M.; Maubach, J.-M.-L. Evaluating sharpness functions for automated scanning electron microscopy. J. Microsc. 2009, 240, 38–49. [Google Scholar] [CrossRef] [PubMed]
  33. Bueno-Ibarra, M.-A.; Alvarez-Borrego, J.; Acho, L.; Chávez-Sánchez, M.-C. Fast autofocus algorithm for automated microscopes. Opt. Eng. 2005, 44, 6. [Google Scholar] [CrossRef]
  34. Nayar, S.K.; Nakagawa, Y. Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 824–831. [Google Scholar] [CrossRef]
  35. Pertuz, S.; Puig, D.; Garcia, M.-A. Analysis of focus measure operators for shape-from-focus. Pattern Recognit. 2013, 46, 1415–1432. [Google Scholar] [CrossRef]
  36. Santos, A.; Ortiz de Solórzano, C.; Vaquero, J.J.; Peña, J.M.; Malpica, N.; del Pozo, F. Evaluation of autofocus functions in molecular cytogenetic analysis. J. Microsc. 1997, 188, 264–272. [Google Scholar] [CrossRef]
  37. Brazdilova, S.-L.; Kozubek, M. Information content analysis in automated microscopy imaging using an adaptive autofocus algorithm for multimodal functions. J. Microsc. 2009, 236, 194–202. [Google Scholar] [CrossRef]
  38. Shah, M.-I.; Mishra, S.; Rout, C. Establishment of hybridized focus measure functions as a universal method for autofocusing. J. Biomed. Opt. 2017, 22, 126004. [Google Scholar] [CrossRef]
  39. Wan, T.; Zhu, C.; Qin, Z. Multifocus image fusion based on robust principal component analysis. Pattern Recognit. Lett. 2013, 34, 1001–1008. [Google Scholar] [CrossRef]
  40. Wu, G.; Liu, Z.; Fang, E.; Yu, H. Reconstruction of spectral color information using weighted principal component analysis. Optik 2015, 126, 1249–1253. [Google Scholar] [CrossRef]
  41. Kang, X.; Xiang, X.; Li, S.; Benediktsson, J.-A. PCA-Based Edge-Preserving Features for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7140–7151. [Google Scholar] [CrossRef]
  42. Uddin, M.P.; Mamun, M.A.; Hossain, M.A. PCA-based feature reduction for hyperspectral remote sensing image classification. IETE Tech. Rev. 2021, 38, 377–396. [Google Scholar] [CrossRef]
  43. Moore, B.E.; Gao, C.; Nadakuditi, R.R. Panoramic Robust PCA for Foreground–Background Separation on Noisy, Free-Motion Camera Video. IEEE Trans. Comput. Imaging 2019, 5, 195–211. [Google Scholar] [CrossRef]
  44. Moeller, S.; Pisharady, P.K.; Ramanna, S.; Lenglet, C.; Wu, X.; Dowdle, L.; Yacoub, E.; Uğurbil, K.; Akçakaya, M. NOise reduction with DIstribution Corrected (NORDIC) PCA in dMRI with complex-valued parameter-free locally low-rank processing. NeuroImage 2021, 226, 117539. [Google Scholar] [CrossRef]
  45. MATLAB®. Available online: https://www.mathworks.com/ (accessed on 20 September 2024).
  46. Nasibov, H.; Kholmatov, A.; Hacizade, F. Investigation of autofocus algorithms for Vis-NIR and NIR hyperspectral imaging microscopes. In Proceedings of the NIR 2013—16th International Conference on Near Infrared Spectroscopy, La Grande-Motte, France, 2–7 June 2013; pp. 595–601. [Google Scholar]
  47. Peleg, M.; McClements, J. Measures of line jaggedness and their use in foods textural evaluation. Crit. Rev. Food Sci. Nutr. 1997, 37, 491–518. [Google Scholar] [CrossRef]
  48. Nouri, D.; Lucas, Y.; Treuillet, S. Calibration and test of a hyperspectral imaging prototype for intra-operative surgical assistance. In Proceedings of the SPIE 8676, Medical Imaging, Lake Buena Vista, FL, USA, 9–14 February 2013. [Google Scholar] [CrossRef]
Figure 1. (a) A characteristic FM curve; (b) a schematic of HSIMs: IC—imaging camera, VM—precise vertical translation stage, SG—spectrograph, OF—optical fiber, O—objective lens, IRL—infrared lamp, CL—collimated light source, S—sample, ST—sample table, LS—translation linear stages, PS—power supply, PC—personal computer; (c) in-focus images of the samples used in the experiments.
Figure 2. (a) The view of the vis-NIR (left) and NIR (right) hyperspectral imaging systems (HSISs); (b) examples of vis-NIR soil hyperspectral images recorded with HSIS1: at 50% (top), 90% (middle), and in-focus (bottom) positions of the focus curve; (c) examples of NIR soil hyperspectral images recorded with HSIS2: at 50% (top), 90% (middle), and in-focus (bottom) positions of the focus curve.
Figure 3. Validation phase: the normalized values of the FMs of an image stack, recorded with the visible-range video system (with the removed spectrograph from the HSIM1) at 100× optical magnification. For the abbreviations of FMs, please refer to Section 2.2.
Figure 4. Validation phase: the normalized values of the FMs of an image stack, recorded with the NIR-range video system (with the removed spectrograph from the HSIS2) at 1× optical magnification.
Figure 5. An example of an image profile along the spatial axis (a) and the spectral axis (b), where the violet curves correspond to well-focused images, while the red and green curves represent defocused images with focus measures at 50% and 90% of the maximum, respectively.
Figure 6. The normalized values of the FMs of an image stack, recorded with the vis-NIR-range hyperspectral system HSIS1 at 0.5× optical magnification.
Figure 7. The normalized values of the FMs of an image stack, recorded with the vis-NIR-range hyperspectral system HSIM1 at 100× optical magnification.
Figure 8. The normalized values of the FMs of an image stack, recorded with the NIR-range hyperspectral system HSIM2 at 25× optical magnification.
Figure 9. Noisy Hyperspectral Images. The normalized values of the FMs of an image stack, recorded with the NIR-range hyperspectral system HSIS2 at 0.5× optical magnification.
Table 1. Summary of HSIS.

| Setup | HSIM1 | HSIM2 | HSIS1 | HSIS2 |
| --- | --- | --- | --- | --- |
| Spectral range | 400–1000 nm | 900–1700 nm | 400–1000 nm | 900–1700 nm |
| Entrance slit, width × height | 30 μm × 9.8 mm | 50 μm × 9.8 mm | 25 μm × 9.8 mm | 25 μm × 9.8 mm |
| Spectral resolution | 3.8 nm/pixel | 6.7 nm/pixel | 1.3 nm/pixel | 4.1 nm/pixel |
| Imaging array | Pixel Fly, PCO | XenIcs, XEVA-17, InGaAs | EO, DCC3240x | FLIR, A6261, InGaAs |
| Array resolution, pixels | 1024 × 1392 | 256 × 320 | 1280 × 1024 | 640 × 512 |
| Dynamic range | 14 bits | 12 bits | 12 bits | 14 bits |
| Magnification | 50× to 100× | 25× to 50× | 0.1× to 0.5× | 0.1× to 0.5× |
Table 2. Validation phase. Ranking of FMs for vis-NIR conventional video images.

| FM | Accuracy | Unimodality | Width at 50% | Width at 90% | Smoothness | Overall Score | Ranking |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TEN1 | 0.10 | 0.00 | 0.59 | 0.37 | 0.59 | 0.92 | 1 |
| BOD | 0.15 | 0.00 | 0.54 | 0.46 | 0.58 | 0.93 | 2 |
| TEN2 | 0.10 | 0.00 | 0.56 | 0.41 | 0.62 | 0.94 | 3 |
| BRE | 0.05 | 0.05 | 0.59 | 0.47 | 0.60 | 0.97 | 4 |
| ABG | 0.05 | 0.05 | 0.68 | 0.60 | 0.59 | 1.08 | 5 |
| SAG | 0.11 | 0.11 | 0.59 | 0.53 | 0.81 | 1.14 | 6 |
| EIG | 0.32 | 0.12 | 0.62 | 0.58 | 0.77 | 1.20 | 7 |
| SML | 0.59 | 0.33 | 0.78 | 0.81 | 0.87 | 1.57 | 8 |
| EOL | 0.20 | 1.00 | 0.67 | 0.70 | 1.00 | 1.73 | 9 |
| DLF | 1.00 | 0.45 | 1.00 | 1.00 | 0.75 | 1.94 | 10 |
| WL3 | 0.10 | 0.00 | 0.55 | 0.39 | 0.49 | 0.84 | 1 |
| WL2 | 0.21 | 0.19 | 0.57 | 0.47 | 0.51 | 0.94 | 2 |
| WL1 | 0.18 | 0.11 | 0.59 | 0.48 | 0.53 | 0.95 | 3 |
| FTF | 0.83 | 0.92 | 0.69 | 0.76 | 0.67 | 1.74 | 4 |
| DCT | 1.00 | 1.00 | 0.74 | 0.90 | 0.65 | 1.94 | 5 |
| MD-DCT | 0.79 | 0.71 | 1.00 | 1.00 | 1.00 | 2.03 | 6 |
| NVR | 0.41 | 0.45 | 0.43 | 0.31 | 0.71 | 1.08 | 1 |
| ENT | 0.31 | 0.40 | 0.50 | 0.48 | 0.75 | 1.14 | 2 |
| VOL5 | 0.40 | 0.40 | 0.61 | 0.59 | 0.85 | 1.33 | 3 |
| ACF | 0.51 | 0.55 | 0.69 | 0.55 | 0.71 | 1.36 | 4 |
| VAR | 0.46 | 0.60 | 0.85 | 0.77 | 0.78 | 1.58 | 5 |
| WHS | 1.00 | 1.00 | 1.00 | 1.00 | 0.92 | 2.20 | 6 |
Table 3. Validation phase. Ranking of FMs for NIR conventional video images.

| FM | Accuracy | Unimodality | Width at 50% | Width at 90% | Smoothness | Overall Score | Ranking |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TEN1 | 0.30 | 0.02 | 0.55 | 0.45 | 0.50 | 0.92 | 1 |
| TEN2 | 0.21 | 0.06 | 0.61 | 0.59 | 0.45 | 0.99 | 2 |
| ABG | 0.23 | 0.02 | 0.78 | 0.56 | 0.49 | 1.10 | 3 |
| BRE | 0.58 | 0.07 | 0.59 | 0.52 | 0.59 | 1.14 | 4 |
| BOD | 0.51 | 0.04 | 0.67 | 0.61 | 0.51 | 1.16 | 5 |
| SAG | 0.62 | 0.14 | 0.69 | 0.73 | 0.61 | 1.34 | 6 |
| EIG | 0.42 | 0.32 | 0.82 | 0.88 | 0.67 | 1.47 | 7 |
| EOL | 0.73 | 0.32 | 0.96 | 0.78 | 1.00 | 1.78 | 8 |
| DLF | 1.00 | 0.91 | 0.87 | 0.93 | 0.85 | 2.04 | 9 |
| SML | 0.63 | 1.00 | 1.00 | 1.00 | 0.91 | 2.06 | 10 |
| WL3 | 0.33 | 0.12 | 0.50 | 0.48 | 0.59 | 0.98 | 1 |
| WL2 | 0.37 | 0.22 | 0.57 | 0.51 | 0.45 | 0.99 | 2 |
| WL1 | 0.41 | 0.32 | 0.75 | 0.65 | 0.59 | 1.27 | 3 |
| FTF | 0.63 | 0.32 | 0.96 | 0.87 | 0.67 | 1.62 | 4 |
| DCT | 0.85 | 1.00 | 0.87 | 0.93 | 0.95 | 2.06 | 5 |
| MD-DCT | 1.00 | 0.91 | 1.00 | 1.00 | 1.00 | 2.20 | 6 |
| VOL5 | 0.40 | 0.21 | 0.61 | 0.49 | 0.65 | 1.11 | 1 |
| ACF | 0.41 | 0.09 | 0.59 | 0.65 | 0.60 | 1.14 | 2 |
| ENT | 0.31 | 0.42 | 0.56 | 0.60 | 0.75 | 1.23 | 3 |
| NVR | 0.48 | 0.13 | 0.63 | 0.71 | 0.71 | 1.29 | 4 |
| VAR | 0.76 | 1.00 | 0.85 | 0.77 | 1.00 | 1.97 | 5 |
| WHS | 1.00 | 0.96 | 1.00 | 1.00 | 0.91 | 2.18 | 6 |
Table 4. The ranking of the FMs for the vis-NIR spectral band hyperspectral images.

| FM | Accuracy | Unimodality | Width at 50% | Width at 90% | Smoothness | Overall Score | Ranking |
|---|---|---|---|---|---|---|---|
| BRE | 0.19 | 0.07 | 0.39 | 0.35 | 0.30 | 0.64 | 1 |
| BOD | 0.21 | 0.10 | 0.37 | 0.35 | 0.35 | 0.66 | 2 |
| EIG | 0.30 | 0.12 | 0.32 | 0.38 | 0.32 | 0.67 | 3 |
| SAG | 0.29 | 0.14 | 0.29 | 0.33 | 0.41 | 0.68 | 4 |
| TEN1 | 0.23 | 0.08 | 0.45 | 0.36 | 0.31 | 0.70 | 5 |
| TEN2 | 0.21 | 0.09 | 0.41 | 0.32 | 0.42 | 0.71 | 6 |
| ABG | 0.29 | 0.11 | 0.37 | 0.40 | 0.49 | 0.80 | 7 |
| DLF | 0.67 | 0.45 | 0.87 | 0.93 | 0.55 | 1.60 | 8 |
| EOL | 1.00 | 1.00 | 1.00 | 0.88 | 1.00 | 2.19 | 9 |
| SML | 0.93 | 1.00 | 1.00 | 1.00 | 0.97 | 2.19 | 10 |
| WL3 | 0.23 | 0.08 | 0.38 | 0.35 | 0.31 | 0.65 | 1 |
| WL2 | 0.24 | 0.12 | 0.39 | 0.41 | 0.31 | 0.70 | 2 |
| WL1 | 0.27 | 0.12 | 0.41 | 0.43 | 0.34 | 0.75 | 3 |
| FTF | 0.53 | 1.00 | 1.00 | 0.87 | 0.67 | 1.87 | 4 |
| DCT | 0.75 | 0.90 | 0.81 | 1.00 | 0.95 | 1.98 | 5 |
| MD-DCT | 1.00 | 0.91 | 0.93 | 1.00 | 1.00 | 2.17 | 6 |
| ACF | 0.41 | 0.20 | 0.33 | 0.34 | 0.25 | 0.70 | 1 |
| VOL5 | 0.45 | 0.31 | 0.61 | 0.59 | 0.36 | 1.07 | 2 |
| ENT | 0.41 | 0.32 | 0.68 | 0.56 | 0.45 | 1.12 | 3 |
| VAR | 1.00 | 0.43 | 1.00 | 0.67 | 0.71 | 1.77 | 4 |
| NVR | 0.94 | 0.37 | 0.91 | 1.00 | 0.79 | 1.86 | 5 |
| WHS | 0.80 | 1.00 | 0.83 | 0.81 | 1.00 | 2.00 | 6 |
Table 5. The ranking of the FMs for the NIR spectral band hyperspectral images.

| FM | Accuracy | Unimodality | Width at 50% | Width at 90% | Smoothness | Overall Score | Ranking |
|---|---|---|---|---|---|---|---|
| BOD | 0.05 | 0.04 | 0.37 | 0.35 | 0.51 | 0.72 | 1 |
| TEN1 | 0.15 | 0.02 | 0.60 | 0.52 | 0.50 | 0.95 | 2 |
| TEN2 | 0.25 | 0.02 | 0.64 | 0.58 | 0.45 | 1.01 | 3 |
| ABG | 0.10 | 0.02 | 0.70 | 0.62 | 0.49 | 1.06 | 4 |
| BRE | 0.19 | 0.07 | 0.61 | 0.68 | 0.59 | 1.11 | 5 |
| SAG | 0.52 | 0.24 | 0.59 | 0.60 | 0.61 | 1.19 | 6 |
| EIG | 0.32 | 0.38 | 0.42 | 0.43 | 1.00 | 1.27 | 7 |
| SML | 1.00 | 0.80 | 0.50 | 0.55 | 0.71 | 1.64 | 8 |
| DLF | 0.64 | 0.71 | 0.66 | 0.93 | 0.86 | 1.72 | 9 |
| EOL | 0.73 | 1.00 | 1.00 | 1.00 | 0.85 | 2.06 | 10 |
| WL3 | 0.13 | 0.12 | 0.50 | 0.47 | 0.40 | 0.81 | 1 |
| WL2 | 0.16 | 0.12 | 0.59 | 0.56 | 0.45 | 0.95 | 2 |
| WL1 | 0.31 | 0.32 | 0.62 | 0.63 | 0.57 | 1.14 | 3 |
| FTF | 1.00 | 1.00 | 0.86 | 0.87 | 1.00 | 2.12 | 4 |
| DCT | 0.85 | 0.90 | 0.67 | 0.73 | 0.70 | 1.73 | 5 |
| MD-DCT | 1.00 | 0.80 | 1.00 | 1.00 | 0.88 | 2.10 | 6 |
| NVR | 0.44 | 0.23 | 0.59 | 0.57 | 0.41 | 1.04 | 1 |
| ENT | 0.49 | 0.12 | 0.61 | 0.60 | 0.35 | 1.05 | 2 |
| ACF | 0.57 | 0.31 | 0.60 | 0.55 | 0.68 | 1.24 | 3 |
| VOL5 | 0.65 | 0.29 | 0.71 | 0.60 | 0.63 | 1.33 | 4 |
| VAR | 1.00 | 1.00 | 0.85 | 0.86 | 0.83 | 2.04 | 5 |
| WHS | 0.91 | 0.80 | 1.00 | 1.00 | 1.00 | 2.11 | 6 |
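Gradient-based operators of the TEN (Tenengrad) family lead the rankings in Tables 2–5. As an illustration only, a common Tenengrad formulation (mean squared Sobel gradient magnitude) can be sketched as below; the paper's TEN1/TEN2 variants may differ in thresholding or normalization, so this is a generic sketch, not the authors' exact implementation.

```python
import numpy as np

def tenengrad(img, threshold=0.0):
    """Generic Tenengrad focus measure: mean squared Sobel gradient magnitude.

    `threshold` suppresses small gradient responses (often used to reject
    noise); the default keeps all of them.
    """
    img = img.astype(float)
    # 3x3 Sobel responses computed on interior pixels without external deps:
    # gx uses kernel [[-1,0,1],[-2,0,2],[-1,0,1]], gy its transpose.
    gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
    g2 = gx * gx + gy * gy
    g2[g2 < threshold] = 0.0
    return g2.mean()

# Sanity check: a box-blurred copy of a noisy image scores lower,
# i.e., the FM decreases as the image moves away from sharp focus.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = 0.25 * (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
                  + np.roll(sharp, (1, 1), (0, 1)))
assert tenengrad(sharp) > tenengrad(blurred)
```

During autofocus, such an operator is evaluated at each axial position and the stage is driven toward the position that maximizes the FM value.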
Nasibov, H. Evaluation of Focus Measures for Hyperspectral Imaging Microscopy Using Principal Component Analysis. J. Imaging 2024, 10, 240. https://doi.org/10.3390/jimaging10100240