Article

Impact of Wavelet Kernels on Predictive Capability of Radiomic Features: A Case Study on COVID-19 Chest X-ray Images

Francesco Prinzi, Carmelo Militello, Vincenzo Conti and Salvatore Vitabile
1 Department of Biomedicine, Neuroscience and Advanced Diagnostics (BiND), University of Palermo, 90127 Palermo, Italy
2 Institute for High-Performance Computing and Networking, National Research Council (ICAR-CNR), 90146 Palermo, Italy
3 Faculty of Engineering and Architecture, University Kore of Enna, 94100 Enna, Italy
* Author to whom correspondence should be addressed.
J. Imaging 2023, 9(2), 32; https://doi.org/10.3390/jimaging9020032
Submission received: 21 December 2022 / Revised: 15 January 2023 / Accepted: 28 January 2023 / Published: 30 January 2023

Abstract

Radiomic analysis allows for the detection of imaging biomarkers supporting decision-making processes in clinical environments, from diagnosis to prognosis. Frequently, the original set of radiomic features is augmented by considering high-level features, such as wavelet transforms. However, several wavelet families (so-called kernels) can generate different multi-resolution representations of the original image, and it is not yet clear which of them produces the most salient images. In this study, an in-depth analysis is performed by comparing different wavelet kernels and by evaluating their impact on the predictive capabilities of radiomic models. A dataset composed of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build up the predictive models. Next, to evaluate the models' generalization capability on unseen data, a test phase was performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and they were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.

1. Introduction

Quantitative imaging biomarkers, i.e., radiomic features, can be used to extract information complementary to the visual approach of the radiologist [1]. Radiomics has been exploited in different scenarios to support the decision making of clinicians at different stages of the care process, from diagnosis to prognosis. In more detail, radiomic signatures can be used for the diagnosis of several pathologies, to predict the response to therapy, and to categorize clinical outcomes in general. Although several best practices [2] and standardization initiatives [3] have been proposed, full reproducibility of the radiomic process is still lacking. The imaging modality, acquisition protocol, feature extraction settings, preprocessing, and the machine learning (ML) workflow are all variables undermining the reproducibility of the radiomic process. Several researchers have tried to evaluate the robustness and the predictive capabilities of radiomic features, depending on specific external parameters (e.g., segmentation method, quantization level, and preprocessing steps) [4,5,6]. This already complex scenario becomes even more complicated when the classical radiomic feature set is extended by also considering high-level features, such as wavelet transforms, Laplacian of Gaussian (LoG) filters, and intensity transformations (e.g., logarithm, exponential, gradient, etc.).
Wavelet-derived features have shown strong predictive capabilities in several contexts: tumor-type prediction of early-stage lung nodules in CT [7], neoadjuvant chemotherapy treatment prediction for breast cancer in MRI [8], low-dose rate radiotherapy treatment response prediction of gastric carcinoma in CT [9], liver cirrhosis detection [10], glioblastoma multiforme differentiation from brain metastases in MRI [11], and grading of COVID-19 pulmonary lesions in CT [12]. The wavelet-derived features are calculated on the image decompositions (four for 2D images, e.g., X-ray, mammography, ultrasound; eight for 3D volumes, e.g., CT, MRI), providing multi-resolution images. This leads to a substantial increase in the number of features that, if not properly managed, can lead predictive systems to incur the curse of dimensionality [13]. In addition, several families of wavelet transforms exist, some optimized for noise reduction, others for image compression; however, in general, all provide a multi-resolution representation of the initial image. Considering that the predictive ability of wavelet-derived features often exceeds that of the original ones, analyzing and comparing the behavior of wavelet kernels is worthwhile in order to provide recommendations for their use.
In radiomics, wavelets are often used without paying attention to the type of kernel involved: the commonly followed approach is to use the default kernel for feature extraction, without evaluating a priori which is more suitable for the specific clinical scenario. Few researchers have approached this problem. In [14], wavelet kernels were compared to evaluate the role of CT radiomic features for lung cancer prediction. In [15], a similar approach was used for the diagnosis of colorectal cancer patients in contrast-enhanced CT. Both studies used CT imaging, a modality that can provide images with higher resolution and more defined details compared with projective chest X-ray (CXR) imaging.
In this study, an in-depth analysis is performed by comparing different wavelet kernels and by evaluating their impact on the predictive capabilities of radiomic models. As a case study, three machine learning radiomic models were implemented to predict the prognosis of COVID-19 patients from CXR images. The Biorthogonal, Coiflets, Daubechies, Discrete Meyer, Haar, Reverse Biorthogonal, and Symlets wavelet families were considered to quantify and compare the predictive performance of the radiomic features. The radiomic features were extracted from the decomposed images, preprocessed, selected, and then used to train several ML models, including random forest (RF), XGBoost (XGB), and support vector machine (SVM). The models and the different wavelets were compared to identify the most predictive kernel.
The remainder of this paper is structured as follows: Section 2 describes the dataset used, the extraction and preprocessing of radiomic features, and the model training. Section 3 provides the results obtained for the wavelet kernels and trained models. Section 4 discusses the experimental findings. Finally, Section 5 outlines the study conclusions.

2. Materials and Methods

2.1. Dataset Characteristics and Lung Delineation

The dataset used is composed of 1589 CXR images of COVID-19 patients, labeled as 'SEVERE' or 'MILD' disease, according to [16]. This dataset is split into 1103 and 486 patients used for the training and test phases, respectively. This partition was established by the organizing committee of the COVID CXR Hackathon competition [17], who made these datasets available. In more detail, the training dataset (1103 samples) includes 568 severe and 535 mild cases; the test dataset (486 samples) includes 180 severe and 306 mild cases. From a visual evaluation, it was possible to note that the multicentric dataset (collected by six different hospitals) is heterogeneous in terms of image size, quality, gray-level distribution, and origin (some are native digital images, whereas others were obtained by scanning traditional X-ray films). In particular, regarding the size: for the training dataset, the most frequent size is 2336 × 2836 pixels (35.7%); the other images have variable sizes (1396–4280 × 1676–4280 pixels). For the test set (composed of 486 images), 88.27% of the images have a size of 2336 × 2836 pixels; the others have variable sizes (1648–3027 × 1440–3000 pixels). According to [16], in our study, all the images were resized to 1024 × 1024 pixels. The CXR images were stored in .PNG format with a 96 DPI resolution (pixel size 0.265 × 0.265 mm), and no metadata related to acquisition details are available.
To extract the radiomic features, the regions of interest (ROIs) containing the lungs have to be identified. To achieve this, a custom MATLAB®-coded tool was implemented to delineate the lung ROIs: in particular, a semi-automatic delineation modality was implemented to identify the maximum elliptical ROI contained within the lung. This choice was motivated by the need to focus attention on the central region of the lung, excluding peripheral zones. Figure 1 shows two segmentation examples. Due to the excessive heterogeneity of the CXR dataset, automatic lung segmentation approaches did not provide satisfactory results. For this reason, it was decided to implement a tool that can easily support clinicians during the image annotation procedure. Specifically, the implemented semi-automatic algorithm allows the automatic identification of the bounding box containing the lung and the determination of the maximum elliptical ROI contained within it. The clinician can decide whether to accept it, if the result is satisfactory, or to modify it to find a more suitable fit between the elliptical ROI and the lung area by changing the orientation and size of the ellipse. Because this is a supervised semi-automatic approach (each segmentation is directly validated), and considering the experience of the clinicians who supported this study, no further evaluation step was performed. We decided to implement an ad hoc computer-assisted tool that can simply and intuitively guide clinicians through the steps of lung segmentation, from CXR image selection to segmentation mask storage.
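To give a concrete idea of the mask-generation step, the following minimal Python sketch builds a binary mask of the largest axis-aligned ellipse inscribed in a given lung bounding box. The bounding-box format, the image size, and the restriction to a non-rotated ellipse are illustrative assumptions: the actual tool is MATLAB-coded and lets the clinician adjust the orientation and size of the ellipse.

```python
import numpy as np

def elliptical_roi_mask(shape, bbox):
    """Binary mask of the largest axis-aligned ellipse inscribed in a bounding box.

    shape: (rows, cols) of the CXR image.
    bbox:  (r0, c0, r1, c1) lung bounding box (hypothetical format).
    """
    r0, c0, r1, c1 = bbox
    cy, cx = (r0 + r1) / 2.0, (c0 + c1) / 2.0   # ellipse center
    a, b = (r1 - r0) / 2.0, (c1 - c0) / 2.0     # semi-axes along rows and columns
    rr, cc = np.ogrid[:shape[0], :shape[1]]
    inside = ((rr - cy) / a) ** 2 + ((cc - cx) / b) ** 2 <= 1.0
    return inside.astype(np.uint8)

# Example: 1024 x 1024 image with a hypothetical right-lung bounding box.
mask = elliptical_roi_mask((1024, 1024), (200, 120, 820, 480))
```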

2.2. Wavelet Transform

The widespread use of wavelet transforms in several applications concerning signal and image processing is due to their ability to capture information in both the frequency and time domains. In this study, the discrete wavelet transform (DWT) was applied to CXR images. The image is passed through high-pass (h_ψ) and low-pass (h_φ) filtering operations, decomposing it into high-frequency (detail) and low-frequency (approximation) components. The decomposition of the image into sub-images at different resolutions allows for multi-resolution analysis [18,19]. For this reason, the DWT has found numerous applications in image processing, particularly denoising [20] and compression [21,22]. The DWT is computed through two functions, the scaling function and the wavelet function [23]. For a 2D signal f(x, y) of size M × N (such as the CXR images considered in this study), the DWT is defined as:
$$W_\phi(j_0, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\,\phi_{j_0,m,n}(x,y) \tag{1}$$

$$W_\psi^{I}(j, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\,\psi_{j,m,n}^{I}(x,y) \tag{2}$$
where:
  • Equation (1) represents the scaled version of the image, computed with the scaling function defined in Equation (3);
  • Equation (2) defines the horizontal (H), vertical (V), and diagonal (D) representations of the image ($I = \{H, V, D\}$), computed with the wavelet function defined in Equation (4).
$$\phi_{j_0,m,n}(x,y) = 2^{j_0/2}\,\phi(2^{j_0}x - m,\ 2^{j_0}y - n) \tag{3}$$

$$\psi_{j,m,n}^{I}(x,y) = 2^{j/2}\,\psi^{I}(2^{j}x - m,\ 2^{j}y - n) \tag{4}$$
Finally, the combined application of the scaling function (φ) and the wavelet function (ψ) yields four image decompositions (LL, LH, HL, and HH) for 2D transforms, as indicated in Equations (5)–(8). The same formulas can be adapted and extended to the 3D case, to consider volumetric imaging (e.g., MRI, CT, etc.). The DWT therefore depends on the low-pass (h_φ) and high-pass (h_ψ) kernels chosen for decomposition.
$$LL = \phi(x,y) = \phi(x)\,\phi(y) \tag{5}$$

$$LH = \psi^{H}(x,y) = \psi(x)\,\phi(y) \tag{6}$$

$$HL = \psi^{V}(x,y) = \phi(x)\,\psi(y) \tag{7}$$

$$HH = \psi^{D}(x,y) = \psi(x)\,\psi(y) \tag{8}$$
In this study, the Biorthogonal (Bior1.5), Coiflets (Coif1), Daubechies (Db3), Discrete Meyer (Dmey), Haar, Reverse Biorthogonal (Rbio1.5), and Symlets (Sym2) wavelet families [24] were considered. The following are the main applications of these wavelet families:
  • Biorthogonal: commonly used for denoising, in particular when white Gaussian noise is present [25];
  • Reverse Biorthogonal: used for compression [26] and denoising [27];
  • Coiflet: used for compression [28] and denoising [29];
  • Daubechies: provides excellent performance in compression and is a popular choice in medical imaging applications [30];
  • Discrete Meyer: generally used for multi-resolution analysis [31], with some variants used for edge and blocking artifact reduction [32];
  • Haar: the first wavelet introduced, for which several generalizations and modifications have been proposed [33]. It is one of the most widely used and has many medical imaging applications, including image fusion [34] and compression in radiography [35], CT, and MRI [36];
  • Symlets: a modified version of the Daubechies wavelets with increased symmetry [37], used for signal decomposition, including characterization of fabric texture [38].
As each wavelet family consists of several kernels, the specific kernels were experimentally selected so that the decomposed images qualitatively and visually remained similar to the original image. Table 1 summarizes the chosen kernels and the respective numbers of coefficients. Except for Dmey, kernels with a number of coefficients less than or equal to 10 were chosen.
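As a concrete illustration, the following minimal sketch uses PyWavelets [24] to compute a single-level 2D DWT of a CXR image with each kernel considered; the random stand-in image is only a placeholder, and the printed filter length (dec_len) corresponds to the number of coefficients listed in Table 1.

```python
import numpy as np
import pywt

# Stand-in for a resized 1024 x 1024 CXR image (replace with the real image array).
image = np.random.rand(1024, 1024)

kernels = ["bior1.5", "coif1", "db3", "dmey", "haar", "rbio1.5", "sym2"]
for name in kernels:
    wavelet = pywt.Wavelet(name)
    # Single-level 2D DWT: approximation (LL) and detail (LH, HL, HH) sub-images.
    LL, (LH, HL, HH) = pywt.dwt2(image, wavelet)
    print(name, wavelet.dec_len, LL.shape)  # dec_len = number of filter coefficients (Table 1)
```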

2.3. Radiomic Features Extraction

The radiomic features were extracted from the CXR images using PyRadiomics [39] and the segmentation masks obtained through the semi-automatic tool described in Section 2.1. The extracted radiomic features belong to the following six categories:
  • First order (FO) intensity histogram statistics;
  • Gray level co-occurrence matrix (GLCM) [40,41];
  • Gray level run length matrix (GLRLM) [42];
  • Gray level size zone matrix (GLSZM) [43];
  • Gray level dependence matrix (GLDM) [44];
  • Neighboring gray tone difference matrix (NGTDM) [45].
Moreover, each ROI was filtered considering all the wavelet families discussed above. Specifically, for each image and for each wavelet family, four decompositions (LL, LH, HL, and HH) were calculated, and then the features were extracted, obtaining a total of 93 × 4 = 372 features per family. Finally, the original radiomic features (without wavelet filtering) were also extracted to compare the wavelet-derived vs. original predictive capabilities.
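A minimal extraction sketch along these lines, assuming the standard PyRadiomics interface (the file paths and the PNG mask format are hypothetical), could look as follows:

```python
from radiomics import featureextractor

# Hypothetical paths to one resized CXR image and its elliptical lung mask.
image_path, mask_path = "cxr_0001.png", "cxr_0001_mask.png"

settings = {
    "force2D": True,     # CXR is projective 2D imaging
    "wavelet": "haar",   # kernel under evaluation (bior1.5, coif1, db3, dmey, haar, rbio1.5, sym2)
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
for feature_class in ["firstorder", "glcm", "glrlm", "glszm", "gldm", "ngtdm"]:
    extractor.enableFeatureClassByName(feature_class)
extractor.enableImageTypeByName("Wavelet")  # adds the LL, LH, HL, HH decompositions

features = extractor.execute(image_path, mask_path)
wavelet_features = {k: v for k, v in features.items() if k.startswith("wavelet")}
```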

2.4. Radiomic Features Preprocessing

To obtain a subset of non-redundant features with relevant information content, preprocessing and selection were performed with the following steps [2] (a simplified sketch is given after the list):
  • Near-zero variance analysis: aimed at removing features with low information content. This operation considered a variance cutoff of 0.01: features with a variance less than or equal to this threshold were discarded;
  • Correlation analysis: aimed at removing highly correlated features by means of the Spearman correlation for pairwise feature comparison. For each set of N correlated features, N − 1 were removed. Specifically, the correlation matrix was first calculated, and then it was analyzed according to the following decomposition priority: LH, HL, HH, and LL. As values larger than 0.80 are commonly used for the Spearman correlation [46,47,48,49], a threshold of 0.85 was chosen;
  • Statistical analysis: the Mann–Whitney U test was used to test the difference between the mild and severe distributions, computing the p-value for each feature selected in the previous step. The p-value threshold was set to 0.05.
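The following simplified sketch illustrates the three steps on a pandas DataFrame of features with a binary label vector (0 = MILD, 1 = SEVERE); it ignores the decomposition-priority ordering described above, so it is an approximation of the procedure rather than a faithful reimplementation.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

def preprocess_features(X: pd.DataFrame, y: pd.Series,
                        var_cut=0.01, corr_cut=0.85, p_cut=0.05) -> pd.DataFrame:
    # 1) Near-zero variance analysis: drop features with variance <= 0.01.
    X = X.loc[:, X.var() > var_cut]

    # 2) Correlation analysis: for every pair with |Spearman rho| > 0.85, drop one feature.
    corr = X.corr(method="spearman").abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    X = X.drop(columns=[c for c in upper.columns if (upper[c] > corr_cut).any()])

    # 3) Statistical analysis: keep features whose MILD/SEVERE distributions differ (p < 0.05).
    keep = [c for c in X.columns
            if mannwhitneyu(X.loc[y == 0, c], X.loc[y == 1, c]).pvalue < p_cut]
    return X[keep]
```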

2.5. Features Selection and Model Training

To select the most discriminating features, the sequential feature selector (SFS) [50] algorithm was used. The selector was set in forward mode (at each step, a feature is included) and in floating mode (after each inclusion, conditional exclusions are performed), allowing more feature subset combinations to be considered. This allowed for the selection of the best radiomic signature for each model considered (i.e., RF, SVM, and XGB). The SFS algorithm was applied considering a 10-fold stratified cross validation (CV) and using all the features retained after the preprocessing step.
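A minimal configuration along these lines, assuming the MLxtend implementation [50] and hypothetical X_train/y_train variables holding the preprocessed training features (as a DataFrame) and labels, might look as follows:

```python
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

# Sequential floating forward selection: 10-fold stratified CV, accuracy as the score.
sffs = SFS(RandomForestClassifier(n_estimators=100, random_state=0),
           k_features="best",    # keep the subset with the highest CV accuracy
           forward=True,
           floating=True,        # conditional exclusion after each inclusion
           scoring="accuracy",
           cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
           n_jobs=-1)
sffs = sffs.fit(X_train, y_train)          # X_train: preprocessed radiomic features
selected_features = list(sffs.k_feature_names_)
```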
The experiments were conducted in the Python 3.7 environment, using the scikit-learn 1.0.2 and xgboost 1.2.1 libraries for model training (an instantiation sketch is given after the list). In particular:
  • RF was trained using the bootstrap technique with 100 estimators and the Gini criterion.
  • SVM was trained setting the regularization parameter C = 1.0, considering the radial basis function as kernel, the coefficient γ = 1/(n_features × σ), the shrinking method [51], and the probability estimates to enable the AUROC computation. In addition, for SVM, the features were standardized before training.
  • XGB was trained using 100 estimators, a maximum depth of 6, and 'gain' as the importance type. In addition, the binary logistic loss function was used to model the binary classification problem, considering a learning rate of η = 0.3.
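A minimal instantiation sketch of the three classifiers with the settings listed above is shown below; the pipeline wrapping the SVM reflects the feature standardization mentioned above, and gamma="scale" is scikit-learn's built-in counterpart of the coefficient reported for the RBF kernel.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

# Random forest: bootstrap sampling, 100 trees, Gini impurity.
rf = RandomForestClassifier(n_estimators=100, criterion="gini", bootstrap=True)

# SVM: RBF kernel, C = 1.0, shrinking, probability estimates for AUROC; features standardized.
svm = make_pipeline(StandardScaler(),
                    SVC(C=1.0, kernel="rbf", gamma="scale",
                        shrinking=True, probability=True))

# XGBoost: 100 estimators, max depth 6, learning rate 0.3, binary logistic objective.
xgb = XGBClassifier(n_estimators=100, max_depth=6, learning_rate=0.3,
                    objective="binary:logistic", importance_type="gain")
```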
Accuracy, sensitivity, specificity, and AUROC were calculated for all three ML models and all the wavelet kernels considered. In addition, to obtain a precise estimation of the trained models, training performance was calculated by considering a 10-fold stratified CV repeated 20 times on the 1103-sample dataset. Subsequently, to evaluate the models' generalization capability on unseen data, a test phase was performed on the 486-sample dataset.
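The training-phase evaluation can be sketched as follows, reusing the rf estimator and the X_train/y_train variables from the previous snippets; the scorer definitions (sensitivity and specificity as class-wise recall) are an assumption, since the study only lists the four metrics.

```python
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

# 10-fold stratified CV repeated 20 times on the 1103-sample training set.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=20, random_state=0)
scoring = {
    "accuracy": "accuracy",
    "sensitivity": "recall",                                # recall on the positive (SEVERE) class
    "specificity": make_scorer(recall_score, pos_label=0),  # recall on the negative (MILD) class
    "auroc": "roc_auc",
}
scores = cross_validate(rf, X_train, y_train, cv=cv, scoring=scoring)
for metric in scoring:
    values = scores[f"test_{metric}"]
    print(f"{metric}: {values.mean():.3f} +/- {values.std():.3f}")
```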

3. Results

3.1. Features Preprocessing

Figure 2 shows the radiomic features retained after each preprocessing step (upper) and for each wavelet decomposition (lower). After the near-zero variance analysis, the Dmey, Bior1.5, and Haar kernels had the most features selected, while, at the end of preprocessing, the number of features was comparable for each kernel used. As expected, a marked overlap between the selected features belonging to the LL decomposition was observed (Table 2). This finding results from the nature of the LL decomposition, which is essentially derived from a resizing. For the other decompositions (LH, HL, and HH), this overlap decreased because the application of the kernel modifies the image in different ways among the various wavelet families. For each kernel, the complete list of radiomic features remaining after the preprocessing phase is reported in Table 2.

3.2. Features Selection

The radiomic features remaining at the end of the preprocessing phase represented the input of the SFS wrapper method, which was used to select the most discriminating radiomic signature for the SVM, RF, and XGB models. The number of features maximizing accuracy was selected. When different radiomic signatures achieved equal accuracy, the one with the lower standard deviation was selected. The feature selection stage retains only the discriminating features: in fact, considering all the available features (at the end of preprocessing) can lead to performance degradation. Table 3 summarizes the number of features for each wavelet kernel and ML model used for training. On average, a signature consisting of 12–15 features was selected. Additionally, overall, the decision tree-based models (RF and XGB) tended to select more features than SVM. The details of the features selected for each wavelet kernel and for each ML model are reported in Appendix A (Table A1, Table A2, Table A3).

3.3. Predictive Model Results

Table 4, Table 5, Table 6 and Table 7 show the accuracy, sensitivity, specificity, and AUROC obtained in the experimental trials, both in the training and testing phases. The metrics obtained in training are reported as mean ± standard deviation, as they are the values averaged over 20 repetitions of the 10-fold stratified CV.
To verify whether the values obtained across the CV repetitions are statistically different, the ANOVA test was used: for each model (i.e., XGB, SVM, and RF), the accuracy values obtained for each fold were compared, considering the kernels used as groups, obtaining p-values well below 0.05.
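In code, this comparison can be sketched with SciPy's one-way ANOVA, assuming a hypothetical fold_accuracies dictionary that maps each kernel name to the per-fold accuracy values of a given model:

```python
from scipy.stats import f_oneway

# fold_accuracies = {"bior1.5": [...], "coif1": [...], ...}  # per-fold CV accuracies (hypothetical)
f_stat, p_value = f_oneway(*fold_accuracies.values())
print(f"ANOVA across kernels: F = {f_stat:.2f}, p = {p_value:.3g}")
```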
Focusing on wavelet kernels, Db3, Dmey, and Rbio1.5 proved to be the worst for all three ML models. This assessment resulted from the high imbalance between sensitivity and specificity, suggesting overfitted models (high sensitivity vs. low specificity or vice versa). The AUROC is the most widely used index of global diagnostic accuracy, since higher values correspond to a better selective ability of the biomarkers [52]. Excluding Db3, Dmey, and Rbio1.5 for their unbalanced performance, overlapping AUROC values were obtained in tests for Bior1.5, Coif1, Haar, and Sym2 kernels.
Focusing on the ML models, XGB was the least accurate model compared with SVM and RF, which showed comparable accuracy on the test set. Moreover, RF, given the smaller gap between training and testing performance, had a strong generalization ability. In summary, the wavelet-derived features were more predictive than the original ones. This result was confirmed for all three ML models employed.
Figure 3 shows the confusion matrices obtained by the three ML classifiers considering Haar as the wavelet kernel.

4. Discussion

Radiomic features can support radiologists by providing a quantitative viewpoint that is complementary to the human visual perspective. Although many researchers have used classical radiomic features (i.e., FO, GLCM, GLRLM, GLSZM, and GLDM), many others have achieved increased predictive power by exploiting high-level features, computed from filtered images by means of wavelet transforms, LoG filters, and intensity transformations. In particular, wavelet-derived features have demonstrated their predictive capability in several contexts [7,8,9,11,12]. Despite their widespread use in the literature, comparing the different wavelet kernels is difficult because of the few studies [14,15] focused on the problem. Consequently, which wavelet kernel has stronger discriminatory power is unclear.
In this study, a CXR dataset was used to evaluate the impact of wavelet kernels on radiomic model performance for predicting COVID-19 prognosis. Despite their projective nature, CXR images (less detailed than volumetric CT series) are crucial to sustainably (i.e., cheaply and quickly) support healthcare systems. In this context, the ability of wavelets to provide multi-resolution imaging can be used to improve the predictive capabilities of radiographic imaging. The Bior1.5, Coif1, Haar, and Sym2 kernels were the best in terms of predictivity for all three ML models used (i.e., XGB, SVM, and RF). Conversely, Db3, Dmey, and Rbio1.5 showed a serious imbalance between sensitivity and specificity, suggesting overfitted models. Finally, RF showed the strongest generalization ability, demonstrating less performance degradation between the training and testing phases. Figure 4 shows an example of the Haar transform in the four decompositions (LL, LH, HL, and HH). LL represents the rescaled image, which retains features approximating the original image. The other decompositions (LH, HL, and HH), however, show completely different characteristics, especially in the lung regions. The results obtained encourage the use of higher-level features, such as those obtained by wavelet transforms. The models trained with wavelet-derived features outperformed the model trained with the original features. This confirms that the ability to provide a multi-resolution image representation improves the prediction performance of the ML models. The findings of this study, applied to the prognosis of COVID-19, are a starting point for other pathologies analyzed by radiographic imaging. However, further analysis is required when volumetric imaging (e.g., CT and MRI) is used, in which the wavelet transform involves three spatial components.

5. Conclusions

Wavelet transforms represent a powerful tool for extracting biomarkers in radiomics. This study showed how the correct choice of the wavelet kernel used for filtering and extracting radiomic features improves the classification process with respect to the use of the original features. At present, the debate between shallow learning and deep learning is still open, especially in clinical scenarios, where the number of available samples is often not sufficient to train and use deep architectures. In this context, traditional ML techniques are also experiencing a second wave, owing to the increased possibility of obtaining interpretable features and explainable models. In addition, traditional ML does not require a large amount of data to train the model.
Wavelet-derived features must be used wisely. In fact, having a reduced dataset limits the number of features that can be extracted and used without incurring the 'curse of dimensionality' [13]. This phenomenon implies that, with a fixed number of training samples, the average (expected) predictive power of a classifier first increases and then degrades as the dimensionality (i.e., the number of used features) grows. This is due to the model's overfitting on high-dimensional and redundant data, which leads to improved performance on training data but reduces the generalization capability on unseen test data. Although this study mainly focuses on evaluating how the choice of wavelet kernel impacts predictivity, the interpretability of radiomic features must also be studied. In clinical contexts, explainable models are crucial, so that the models can be clinically validated and compared with the medical literature. Explainability improves the usability and acceptability of artificial intelligence (AI) models, as it allows the users to be involved in the debugging and model building processes [53]. In many intensive decision-based tasks, the interpretability of an AI-based system may emerge as an indispensable feature [54]. However, radiomic features have the disadvantage of being low-level and less abstract than deep features, and thus less informative, which can degrade model performance. This phenomenon can be mitigated by considering higher-level radiomic features, such as wavelet-derived ones, where some interpretability is sacrificed for the benefit of performance. While correlating the quantitative value with a clinical meaning is relatively easy for the original radiomic features, for wavelet-derived features this task becomes more complicated, because the physician does not use wavelet transforms in regular activities. However, their use achieves a trade-off between performance and interpretability.

Author Contributions

Conceptualization, F.P. and C.M.; methodology, C.M.; software, F.P.; validation, F.P. and C.M.; formal analysis, C.M.; investigation, F.P.; resources, F.P.; data curation, F.P. and C.M.; writing—original draft preparation, F.P. and C.M.; writing—review and editing, V.C. and S.V.; visualization, V.C.; supervision, V.C. and S.V.; project administration, S.V.; funding acquisition, S.V. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the University of Palermo, Italy. Grant EUROSTART, CUP B79J21038330001, project TRUSTAI4NCDI.

Institutional Review Board Statement

Ethical review and approval were waived due to the retrospective nature of the study.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
AUROC: Area Under Receiver Operating Characteristic
Bior: Biorthogonal Wavelet Kernel
Coif: Coiflets Wavelet Kernel
CT: Computed Tomography
CV: Cross Validation
CXR: Chest X-ray
Db: Daubechies Wavelet Kernel
Dmey: Discrete Meyer Wavelet Kernel
DPI: Dots Per Inch
DWT: Discrete Wavelet Transform
FO: First Order
GLCM: Gray Level Co-occurrence Matrix
GLDM: Gray Level Dependence Matrix
GLRLM: Gray Level Run Length Matrix
GLSZM: Gray Level Size Zone Matrix
Haar: Haar Wavelet Kernel
LoG: Laplacian of Gaussian
MRI: Magnetic Resonance Imaging
NGTDM: Neighboring Gray Tone Difference Matrix
Rbio: Reverse Biorthogonal Wavelet Kernel
RF: Random Forest
ROI: Region Of Interest
SVM: Support Vector Machine
Sym: Symlets Wavelet Kernel
XGB: XGBoost

Appendix A

Table A1. Complete list of radiomic features selected via SFS, considering the XGB model for each wavelet kernel considered.
Wavelet Decomposition | Radiomic Category | Feature Name | Wavelet Kernel (Bior1.5 / Coif1 / Db3 / Dmey / Haar / Rbio1.5 / Sym2)
LLFO10PercentileXX XXXX
LLFO90PercentileXX XXX
LLFOKurtosisX XXXX
LLFOMinimumX XX
LLFORangeXXXXXXX
LLFOSkewnessXXXXXXX
LLGLRLMGrayLevelNonUniformity X X
LLGLSZMHighGrayLevelZoneEmphasis X XXX
LLGLSZMSmallAreaHighGrayLevelEmphasis XX XXX
LHFOEnergyXX X X
LHFOKurtosisX
LHFOSkewness X X X
LHGLRLMLongRunEmphasis X X
LHGLSZMGrayLevelNonUniformity X
LHGLSZMHighGrayLevelZoneEmphasisX X
LHGLSZMLargeAreaEmphasis X
LHGLSZMSizeZoneNonUniformity
LHGLSZMSmallAreaHighGrayLevelEmphasis
LHGLSZMZoneEntropy X X X
LHGLDMDependenceNonUniformityXXX X X
LHGLDMDependenceVariance X
LHGLDMLargeDependenceEmphasis X XXX
LHGLDMLargeDependenceHighGrayLevelEmphasis
HLFOSkewness X X
HLFOMaximum X X
HLGLRLMLongRunEmphasis X
HLGLSZMHighGrayLevelZoneEmphasisX
HLGLSZMLargeAreaEmphasisX X
HLGLSZMSizeZoneNonUniformity X
HLGLSZMSmallAreaHighGrayLevelEmphasis X
HLGLSZMZoneEntropyXXX XXX
HLGLDMDependenceVariance X
HHFOMinimum X X
HHFOSkewness
HHFORange X
HHGLRLMLongRunEmphasis X
HHGLRLMLongRunHighGrayLevelEmphasis X
HHGLSZMGrayLevelNonUniformityX X
HHGLSZMHighGrayLevelZoneEmphasisX X
HHGLSZMSizeZoneNonUniformity X X
HHGLSZMZoneEntropyXX
HHGLDMLargeDependenceHighGrayLevelEmphasis X
TOTAL SELECTED FEATURES: Bior1.5 = 17, Coif1 = 15, Db3 = 10, Dmey = 14, Haar = 19, Rbio1.5 = 13, Sym2 = 16
Table A2. Complete list of radiomic features selected via SFS, considering the SVM model for each wavelet kernel considered.
Wavelet Decomposition | Radiomic Category | Feature Name | Wavelet Kernel (Bior1.5 / Coif1 / Db3 / Dmey / Haar / Rbio1.5 / Sym2)
LLFO10PercentileX XXX
LLFO90PercentileX X
LLFOKurtosisXX X X
LLFOMinimum XX X
LLFORange XXXX
LLFOSkewnessXXXXXXX
LLGLRLMGrayLevelNonUniformity X
LLGLSZMHighGrayLevelZoneEmphasisXXXXXX
LLGLSZMSmallAreaHighGrayLevelEmphasis X XX
LHFOEnergy X
LHFOKurtosisX X
LHFOSkewnessX X X
LHGLRLMLongRunEmphasis X X
LHGLSZMGrayLevelNonUniformity X
LHGLSZMHighGrayLevelZoneEmphasis
LHGLSZMLargeAreaEmphasis X
LHGLSZMSizeZoneNonUniformity
LHGLSZMSmallAreaHighGrayLevelEmphasis
LHGLSZMZoneEntropy
LHGLDMDependenceNonUniformity X
LHGLDMDependenceVariance X X
LHGLDMLargeDependenceEmphasis XXXX
LHGLDMLargeDependenceHighGrayLevelEmphasis X
HLFOKurtosisX X
HLFOSkewness X X
HLFOMaximum X X
HLGLRLMLongRunEmphasis XX
HLGLSZMHighGrayLevelZoneEmphasisX
HLGLSZMLargeAreaEmphasisX X
HLGLSZMSizeZoneNonUniformity X
HLGLSZMSmallAreaHighGrayLevelEmphasis
HLGLSZMZoneEntropyX XXX
HLGLDMDependenceVariance X
HHFOSkewness X
HHFORange X
HHGLRLMLongRunEmphasis X
HHGLRLMLongRunHighGrayLevelEmphasis XX
HHGLSZMGrayLevelNonUniformityX X
HHGLSZMHighGrayLevelZoneEmphasisX X
HHGLSZMSizeZoneNonUniformity X
HHGLSZMZoneEntropyX X
HHGLDMLargeDependenceHighGrayLevelEmphasis X
TOTAL SELECTED FEATURES: Bior1.5 = 14, Coif1 = 9, Db3 = 10, Dmey = 15, Haar = 15, Rbio1.5 = 13, Sym2 = 8
Table A3. Complete list of radiomic features selected via SFS, considering the RF model for each wavelet kernel considered.
Wavelet Decomposition | Radiomic Category | Feature Name | Wavelet Kernel (Bior1.5 / Coif1 / Db3 / Dmey / Haar / Rbio1.5 / Sym2)
LLFO10Percentile XXX X
LLFO90Percentile XX X
LLFOKurtosisXXX XXX
LLFOMinimumX XXXXX
LLFORangeX X X
LLFOSkewnessXXXXXXX
LLGLRLMGrayLevelNonUniformity X X
LLGLSZMHighGrayLevelZoneEmphasisXXXX X
LLGLSZMSmallAreaHighGrayLevelEmphasisX XX
LHFOEnergy X
LHFOKurtosisX X
LHFOSkewnessXX X
LHGLRLMLongRunEmphasis X
LHGLSZMGrayLevelNonUniformity
LHGLSZMHighGrayLevelZoneEmphasisX XX
LHGLSZMLargeAreaEmphasis
LHGLSZMSizeZoneNonUniformity
LHGLSZMSmallAreaHighGrayLevelEmphasis
LHGLSZMZoneEntropy X X
LHGLDMDependenceNonUniformity X X
LHGLDMDependenceVariance X X
LHGLDMLargeDependenceEmphasisX XXXXX
LHGLDMLargeDependenceHighGrayLevelEmphasis
HLFOKurtosisX X
HLFOSkewness X X
HLFOMaximum X X
HLGLRLMLongRunEmphasis
HLGLSZMHighGrayLevelZoneEmphasisX
HLGLSZMLargeAreaEmphasisX
HLGLSZMSizeZoneNonUniformity X
HLGLSZMSmallAreaHighGrayLevelEmphasis X
HLGLSZMZoneEntropyXXX XXX
HLGLDMDependenceVariance
HHFOMinimum X X
HHFOSkewness X
HHFORange X
HHGLRLMLongRunEmphasis X
HHGLRLMLongRunHighGrayLevelEmphasis XX
HHGLSZMGrayLevelNonUniformity X X
HHGLSZMHighGrayLevelZoneEmphasis X
HHGLSZMSizeZoneNonUniformity X
HHGLSZMZoneEntropyXX X
HHGLDMLargeDependenceHighGrayLevelEmphasis
TOTAL SELECTED FEATURES: Bior1.5 = 15, Coif1 = 13, Db3 = 13, Dmey = 12, Haar = 16, Rbio1.5 = 10, Sym2 = 12

References

  1. Gillies, R.; Kinahan, P.; Hricak, H. Radiomics: Images Are More than Pictures, They Are Data. Radiology 2016, 278, 563–577. [Google Scholar] [CrossRef] [Green Version]
  2. Papanikolaou, N.; Matos, C.; Koh, D. How to develop a meaningful radiomic signature for clinical use in oncologic patients. Cancer Imaging 2020, 20, 33. [Google Scholar] [CrossRef]
  3. Zwanenburg, A.; Vallières, M.; Abdalah, M.A.; Aerts, H.J.W.L.; Andrearczyk, V.; Apte, A.; Ashrafinia, S.; Bakas, S.; Beukinga, R.J.; Boellaard, R.; et al. The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-based Phenotyping. Radiology 2020, 295, 328–338. [Google Scholar] [CrossRef] [Green Version]
  4. Scalco, E.; Belfatto, A.; Mastropietro, A.; Rancati, T.; Avuzzi, B.; Messina, A.; Valdagni, R.; Rizzo, G. T2w-MRI signal normalization affects radiomics features reproducibility. Med. Phys. 2020, 47, 1680–1691. [Google Scholar] [CrossRef] [PubMed]
  5. Militello, C.; Rundo, L.; Dimarco, M.; Orlando, A.; D’Angelo, I.; Conti, V.; Bartolotta, T.V. Robustness Analysis of DCE-MRI-Derived Radiomic Features in Breast Masses: Assessing Quantization Levels and Segmentation Agreement. Appl. Sci. 2022, 12, 5512. [Google Scholar] [CrossRef]
  6. Saltybaeva, N.; Tanadini-Lang, S.; Vuong, D.; Burgermeister, S.; Mayinger, M.; Bink, A.; Andratschke, N.; Guckenberger, M.; Bogowicz, M. Robustness of radiomic features in magnetic resonance imaging for patients with glioblastoma: Multi-center study. Phys. Imaging Radiat. Oncol. 2022, 22, 131–136. [Google Scholar] [CrossRef]
  7. Jing, R.; Wang, J.; Li, J.; Wang, X.; Li, B.; Xue, F.; Shao, G.; Xue, H. A wavelet features derived radiomics nomogram for prediction of malignant and benign early-stage lung nodules. Sci. Rep. 2021, 11, 22330. [Google Scholar] [CrossRef] [PubMed]
  8. Zhou, J.; Lu, J.; Gao, C.; Zeng, J.; Zhou, C.; Lai, X.; Cai, W.; Xu, M. Predicting the response to neoadjuvant chemotherapy for breast cancer: Wavelet transforming radiomics in MRI. BMC Cancer 2020, 20, 100. [Google Scholar] [CrossRef] [Green Version]
  9. Hou, Z.; Yang, Y.; Li, S.; Yan, J.; Ren, W.; Liu, J.; Wang, K.; Liu, B.; Wan, S. Radiomic analysis using contrast-enhanced CT: Predict treatment response to pulsed low dose rate radiotherapy in gastric carcinoma with abdominal cavity metastasis. Quant. Imaging Med. Surg. 2018, 8, 410. [Google Scholar] [CrossRef] [PubMed]
  10. Kotowski, K.; Kucharski, D.; Machura, B.; Adamski, S.; Becker, B.G.; Krason, A.; Zarudzki, L.; Tessier, J.; Nalepa, J. Detecting liver cirrhosis in computed tomography scans using clinically-inspired and radiomic features. Comput. Biol. Med. 2022, 152, 106378. [Google Scholar] [CrossRef]
  11. Bijari, S.; Jahanbakhshi, A.; Hajishafiezahramini, P.; Abdolmaleki, P. Differentiating Glioblastoma Multiforme from Brain Metastases Using Multidimensional Radiomics Features Derived from MRI and Multiple Machine Learning Models. BioMed Res. Int. 2022, 2022, 2016006. [Google Scholar] [CrossRef] [PubMed]
  12. Jiang, Z.; Yin, J.; Han, P.; Chen, N.; Kang, Q.; Qiu, Y.; Li, Y.; Lao, Q.; Sun, M.; Yang, D.; et al. Wavelet transformation can enhance computed tomography texture features: A multicenter radiomics study for grade assessment of COVID-19 pulmonary lesions. Quant. Imaging Med. Surg. 2022, 12, 4758–4770. [Google Scholar] [CrossRef] [PubMed]
  13. Keogh, E.; Mueen, A. Curse of Dimensionality; Springer: New York, NY, USA, 2017; pp. 314–315. [Google Scholar] [CrossRef]
  14. Soufi, M.; Arimura, H.; Nagami, N. Identification of optimal mother wavelets in survival prediction of lung cancer patients using wavelet decomposition-based radiomic features. Med. Phys. 2018, 45, 5116–5128. [Google Scholar] [CrossRef] [PubMed]
  15. Cheng, Z.; Huang, Y.; Huang, X.; Wu, X.; Liang, C.; Liu, Z. Effects of different wavelet filters on correlation and diagnostic performance of radiomics features. Zhong Nan Da Xue Xue Bao. Yi Xue Ban = J. Cent. South Univ. Med. Sci. 2019, 44, 244–250. [Google Scholar] [CrossRef]
  16. Soda, P.; D’Amico, N.C.; Tessadori, J.; Valbusa, G.; Guarrasi, V.; Bortolotto, C.; Akbar, M.U.; Sicilia, R.; Cordelli, E.; Fazzini, D.; et al. AIforCOVID: Predicting the clinical outcomes in patients with COVID-19 applying AI to chest-X-rays. An Italian multicentre study. Med. Image Anal. 2021, 74, 102216. [Google Scholar] [CrossRef]
  17. Hackathon Website. COVID CXR Hackathon Competition. 2022. Available online: https://ai4covid-hackathon.it/ (accessed on 5 January 2023).
  18. Ravichandran, D.; Nimmatoori, R.; Gulam Ahamad, M. Mathematical representations of 1D, 2D and 3D wavelet transform for image coding. Int. J. Adv. Comput. Theory Eng. 2016, 5, 20–27. [Google Scholar]
  19. Mallat, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693. [Google Scholar] [CrossRef] [Green Version]
  20. Al Jumah, A. Denoising of an image using discrete stationary wavelet transform and various thresholding techniques. J. Signal Inf. Process. 2013, 4, 28281. [Google Scholar] [CrossRef] [Green Version]
  21. Dautov, Ç.P.; Özerdem, M.S. Wavelet transform and signal denoising using Wavelet method. In Proceedings of the 2018 26th Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar] [CrossRef]
  22. Boix, M.; Canto, B. Wavelet Transform application to the compression of images. Math. Comput. Model. 2010, 52, 1265–1270. [Google Scholar] [CrossRef]
  23. Prasad, P.; Prasad, D.; Rao, G.S. Performance analysis of orthogonal and biorthogonal wavelets for edge detection of X-ray images. Procedia Comput. Sci. 2016, 87, 116–121. [Google Scholar] [CrossRef] [Green Version]
  24. Lee, G.; Gommers, R.; Waselewski, F.; Wohlfahrt, K.; O’Leary, A. PyWavelets: A Python package for wavelet analysis. J. Open Source Softw. 2019, 4, 1237. [Google Scholar] [CrossRef]
  25. Pragada, S.; Sivaswamy, J. Image denoising using matched biorthogonal wavelets. In Proceedings of the 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Bhubaneswar, India, 16–19 December 2008; pp. 25–32. [Google Scholar] [CrossRef]
  26. Abidin, Z.Z.; Manaf, M.; Shibhgatullah, A.S. Experimental approach on thresholding using reverse biorthogonal wavelet decomposition for eye image. In Proceedings of the 2013 IEEE International Conference on Signal and Image Processing Applications, Melaka, Malaysia, 8–10 October 2013; pp. 349–353. [Google Scholar] [CrossRef]
  27. Tilak, T.N.; Krishnakumar, S. Reverse Biorthogonal Spline Wavelets in Undecimated Transform for Image Denoising. Int. J. Comput. Sci. Eng. 2018, 6, 66–72. [Google Scholar] [CrossRef]
  28. Rohima; Barkah Akbar, M. Wavelet Analysis and Comparison from Coiflet Family on Image Compression. In Proceedings of the 2020 8th International Conference on Cyber and IT Service Management (CITSM), Pangkal, Indonesia, 23–24 October 2020; pp. 1–5. [Google Scholar] [CrossRef]
  29. Karim, S.A.A.; Hasan, M.K.; Sulaiman, J.; Janier, J.B.; Ismail, M.T.; Muthuvalu, M.S. Denoising solar radiation data using coiflet wavelets. In Proceedings of the AIP Conference Proceedings, Kuala Lumpur, Malaysia, 3–5 June 2014; American Institute of Physics: College Park, MA, USA, 2014; Volume 1621, pp. 394–401. [Google Scholar] [CrossRef]
  30. Wahid, K. Low complexity implementation of daubechies wavelets for medical imaging applications. In Discrete Wavelet Transforms-Algorithms and Applications; IntechOpen: London, UK, 2011. [Google Scholar] [CrossRef] [Green Version]
  31. Meyer, Y. Wavelets and Operators: Volume 1; Number 37 in Cambridge Studies in Advance Mathematics; Cambridge University Press: Cambridge, UK, 1992. [Google Scholar] [CrossRef]
  32. Wu, M.T. Wavelet transform based on Meyer algorithm for image edge and blocking artifact reduction. Inf. Sci. 2019, 474, 125–135. [Google Scholar] [CrossRef]
  33. Porwik, P.; Lisowska, A. The Haar-wavelet transform in digital image processing: Its status and achievements. Mach. Graph. Vis. 2004, 13, 79–98. [Google Scholar] [CrossRef]
  34. Bhardwaj, J.; Nayak, A. Haar wavelet transform–based optimal Bayesian method for medical image fusion. Med. Biol. Eng. Comput. 2020, 58, 2397–2411. [Google Scholar] [CrossRef] [PubMed]
  35. Narula, S.; Gupta, S. Image Compression Radiography using HAAR Wavelet Transform. Int. J. Comput. Appl. 2015, 975, 8887. [Google Scholar] [CrossRef]
  36. Wang, J.; Huang, K. Medical image compression by using three-dimensional wavelet transformation. IEEE Trans. Med. Imaging 1996, 15, 547–554. [Google Scholar] [CrossRef]
  37. Arfaoui, S.; Mabrouk, A.B.; Cattani, C. Wavelet Analysis: Basic Concepts and Applications; Chapman and Hall/CRC: London, UK, 2021. [Google Scholar] [CrossRef]
  38. Gao, R.X.; Yan, R. Wavelets: Theory and Applications for Manufacturing; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2010. [Google Scholar] [CrossRef]
  39. van Griethuysen, J.J.M.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.H.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J.W.L. Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Res. 2017, 77, e104–e107. [Google Scholar] [CrossRef] [Green Version]
  40. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man, Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  41. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  42. Galloway, M.M. Texture analysis using gray level run lengths. Comput. Graph. Image Process. 1975, 4, 172–179. [Google Scholar] [CrossRef]
  43. Thibault, G.; Angulo, J.; Meyer, F. Advanced Statistical Matrices for Texture Characterization: Application to Cell Classification. IEEE Trans. Biomed. Eng. 2014, 61, 630–637. [Google Scholar] [CrossRef]
  44. Sun, C.; Wee, W.G. Neighboring gray level dependence matrix for texture classification. Comput. Vision Graph. Image Process. 1983, 23, 341–352. [Google Scholar] [CrossRef]
  45. Amadasun, M.; King, R. Textural features corresponding to textural properties. IEEE Trans. Syst. Man Cybern. 1989, 19, 1264–1274. [Google Scholar] [CrossRef]
  46. Leger, S.; Zwanenburg, A.; Pilz, K.; Lohaus, F.; Linge, A.; Zöphel, K.; Kotzerke, J.; Schreiber, A.; Tinhofer, I.; Budach, V.; et al. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling. Sci. Rep. 2017, 7, 13206. [Google Scholar] [CrossRef] [Green Version]
  47. Niu, Q.; Jiang, X.; Li, Q.; Zheng, Z.; Du, H.; Wu, S.; Zhang, X. Texture features and pharmacokinetic parameters in differentiating benign and malignant breast lesions by dynamic contrast enhanced magnetic resonance imaging. Oncol. Lett. 2018, 16, 4607–4613. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Oikonomou, E.K.; Williams, M.C.; Kotanidis, C.P.; Desai, M.Y.; Marwan, M.; Antonopoulos, A.S.; Thomas, K.E.; Thomas, S.; Akoumianakis, I.; Fan, L.M.; et al. A novel machine learning-derived radiotranscriptomic signature of perivascular fat improves cardiac risk prediction using coronary CT angiography. Eur. Heart J. 2019, 40, 3529–3543. [Google Scholar] [CrossRef] [Green Version]
  49. Zhang, Q.; Peng, Y.; Liu, W.; Bai, J.; Zheng, J.; Yang, X.; Zhou, L. Radiomics based on multimodal MRI for the differential diagnosis of benign and malignant breast lesions. J. Magn. Reson. Imaging 2020, 52, 596–607. [Google Scholar] [CrossRef] [PubMed]
  50. Raschka, S. MLxtend: Providing machine learning and data science utilities and extensions to Python’s scientific computing stack. J. Open Source Softw. 2018, 3, 638. [Google Scholar] [CrossRef]
  51. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2011, 2, 27. [Google Scholar] [CrossRef]
  52. Su, J.Q.; Liu, J.S. Linear Combinations of Multiple Diagnostic Markers. J. Am. Stat. Assoc. 1993, 88, 1350–1355. [Google Scholar] [CrossRef]
  53. Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
  54. Combi, C.; Amico, B.; Bellazzi, R.; Holzinger, A.; Moore, J.H.; Zitnik, M.; Holmes, J.H. A manifesto on explainability for artificial intelligence in medicine. Artif. Intell. Med. 2022, 133, 102423. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Two examples of segmentation obtained by means of the semi-automatic tool, which assists the clinician in detecting the elliptical ROIs (highlighted in red) within the lungs: upper row, a MILD sample; lower row, a SEVERE sample.
Figure 2. Upper: Selected features after near-zero variance analysis (in blue), correlation analysis (in orange), and statistical analysis (in gray), respectively. Lower: Selected features considering each wavelet decomposition.
Figure 3. The confusion matrices of the three ML classifiers obtained with the Haar kernel.
Figure 4. The four decompositions (LL, LH, HL, and HH) calculated by means of wavelets. In the example, the Haar kernel was used.
Table 1. Number of coefficients that define the kernel length.
Wavelet Kernel | Coefficients Number
Bior1.5 | 10
Coif1 | 6
Db3 | 6
Dmey | 62
Haar | 2
Rbio1.5 | 10
Sym2 | 4
Table 2. Complete lists of radiomic features remaining after the preprocessing step for each of the wavelet kernels considered.
Wavelet Decomposition | Radiomic Category | Feature Name | Wavelet Kernel (Bior1.5 / Coif1 / Db3 / Dmey / Haar / Rbio1.5 / Sym2)
LLFO10PercentileXXXXXXX
LLFO90PercentileXXXXXXX
LLFOKurtosisXXXXXXX
LLFOMinimumXXXXXXX
LLFORangeXXXXXXX
LLFOSkewnessXXXXXXX
LLGLRLMGrayLevelNonUniformity XX XXX
LLGLSZMHighGrayLevelZoneEmphasisXXXXXXX
LLGLSZMSmallAreaHighGrayLevelEmphasisXXX XXX
LHFOEnergyXX X X
LHFOKurtosisX X
LHFOSkewnessXX X X
LHGLRLMLongRunEmphasis XX X
LHGLSZMGrayLevelNonUniformity X
LHGLSZMHighGrayLevelZoneEmphasisX XX
LHGLSZMLargeAreaEmphasis X
LHGLSZMSizeZoneNonUniformity X
LHGLSZMSmallAreaHighGrayLevelEmphasis X
LHGLSZMZoneEntropyXXXXX X
LHGLDMDependenceNonUniformityXXXXXXX
LHGLDMDependenceVariance X X
LHGLDMLargeDependenceEmphasisXXXXXXX
LHGLDMLargeDependenceHighGrayLevelEmphasis X
HLFOKurtosisX X
HLFOSkewness X X
HLFOMaximum X X
HLGLRLMLongRunEmphasis XX X
HLGLSZMHighGrayLevelZoneEmphasisX
HLGLSZMLargeAreaEmphasisX X
HLGLSZMSizeZoneNonUniformity X
HLGLSZMSmallAreaHighGrayLevelEmphasis X
HLGLSZMZoneEntropyXXXXXXX
HLGLDMDependenceVariance X
HHFOMinimum X X
HHFOSkewness X
HHFORange X
HHGLRLMLongRunEmphasis X
HHGLRLMLongRunHighGrayLevelEmphasis XX
HHGLSZMGrayLevelNonUniformityX XXXX
HHGLSZMHighGrayLevelZoneEmphasisX X
HHGLSZMSizeZoneNonUniformity X X
HHGLSZMZoneEntropyXX X
HHGLDMLargeDependenceHighGrayLevelEmphasis X
TOTAL SELECTED FEATURES: Bior1.5 = 22, Coif1 = 21, Db3 = 17, Dmey = 26, Haar = 22, Rbio1.5 = 17, Sym2 = 20
Table 3. Features selected using the SFS, for each wavelet kernel with the ML models considered.
Wavelet Kernel | Initial Features (After Preprocessing) | XGB | SVM | RF
Bior1.5 | 22 | 17 | 14 | 15
Coif1 | 21 | 15 | 9 | 13
Db3 | 17 | 10 | 10 | 13
Dmey | 26 | 14 | 15 | 12
Haar | 22 | 19 | 15 | 16
Rbio1.5 | 17 | 13 | 13 | 10
Sym2 | 20 | 16 | 8 | 12
no wavelet | 11 | 8 | 9 | 9
Table 4. Accuracy values obtained in the training (mean ± standard deviation) and testing phases for each wavelet kernel, with the three ML models considered. Values marked with an asterisk (*) highlight the best three obtained results.
Wavelet Kernel | XGB Train | XGB Test | SVM Train | SVM Test | RF Train | RF Test
Bior1.5 | 0.633 ± 0.041 | 0.604 | 0.671 ± 0.041 | 0.641 * | 0.661 ± 0.043 | 0.646 *
Coif1 | 0.635 ± 0.042 | 0.619 * | 0.671 ± 0.043 | 0.627 | 0.662 ± 0.039 | 0.631
Db3 | 0.629 ± 0.046 | 0.541 | 0.655 ± 0.045 | 0.641 | 0.641 ± 0.040 | 0.594
Dmey | 0.633 ± 0.047 | 0.555 | 0.654 ± 0.044 | 0.578 | 0.624 ± 0.043 | 0.592
Haar | 0.654 ± 0.045 | 0.619 * | 0.683 ± 0.046 | 0.673 * | 0.674 ± 0.044 | 0.646 *
Rbio1.5 | 0.627 ± 0.047 | 0.586 | 0.646 ± 0.047 | 0.606 | 0.647 ± 0.045 | 0.600
Sym2 | 0.644 ± 0.040 | 0.611 * | 0.672 ± 0.041 | 0.644 * | 0.650 ± 0.044 | 0.650 *
no wavelet | 0.611 ± 0.047 | 0.567 | 0.636 ± 0.042 | 0.619 | 0.630 ± 0.046 | 0.594
Table 5. Sensitivity values obtained in the training (mean ± standard deviation) and testing phases for each wavelet kernel, with the three ML models considered. Values marked with an asterisk (*) highlight the best three obtained results.
Wavelet Kernel | XGB Train | XGB Test | SVM Train | SVM Test | RF Train | RF Test
Bior1.5 | 0.647 ± 0.060 | 0.588 | 0.660 ± 0.060 | 0.633 * | 0.662 ± 0.066 | 0.627 *
Coif1 | 0.646 ± 0.064 | 0.622 * | 0.671 ± 0.066 | 0.627 * | 0.655 ± 0.062 | 0.616 *
Db3 | 0.639 ± 0.070 | 0.577 | 0.643 ± 0.058 | 0.683 | 0.642 ± 0.059 | 0.627
Dmey | 0.646 ± 0.070 | 0.638 | 0.649 ± 0.064 | 0.722 | 0.623 ± 0.063 | 0.738
Haar | 0.661 ± 0.067 | 0.639 * | 0.660 ± 0.070 | 0.628 * | 0.683 ± 0.062 | 0.644 *
Rbio1.5 | 0.635 ± 0.062 | 0.550 | 0.649 ± 0.064 | 0.638 | 0.637 ± 0.064 | 0.616
Sym2 | 0.651 ± 0.061 | 0.644 * | 0.649 ± 0.063 | 0.594 | 0.652 ± 0.063 | 0.611
no wavelet | 0.619 ± 0.066 | 0.655 | 0.614 ± 0.065 | 0.683 | 0.621 ± 0.068 | 0.622
Table 6. Specificity values obtained in the training (mean ± standard deviation) and testing phases for each wavelet kernel, with the three ML models considered. Values marked with an asterisk (*) highlight the best three obtained results.
Wavelet Kernel | XGB Train | XGB Test | SVM Train | SVM Test | RF Train | RF Test
Bior1.5 | 0.618 ± 0.061 | 0.614 * | 0.683 ± 0.062 | 0.647 * | 0.661 ± 0.061 | 0.656 *
Coif1 | 0.622 ± 0.069 | 0.617 * | 0.671 ± 0.056 | 0.627 | 0.669 ± 0.063 | 0.640 *
Db3 | 0.618 ± 0.061 | 0.519 | 0.667 ± 0.065 | 0.617 | 0.641 ± 0.064 | 0.575
Dmey | 0.618 ± 0.064 | 0.506 | 0.658 ± 0.062 | 0.493 | 0.625 ± 0.063 | 0.506
Haar | 0.646 ± 0.067 | 0.608 * | 0.709 ± 0.065 | 0.699 * | 0.665 ± 0.066 | 0.647 *
Rbio1.5 | 0.618 ± 0.065 | 0.607 | 0.643 ± 0.066 | 0.588 | 0.657 ± 0.060 | 0.591
Sym2 | 0.636 ± 0.056 | 0.591 | 0.697 ± 0.059 | 0.673 * | 0.648 ± 0.068 | 0.637
no wavelet | 0.601 ± 0.067 | 0.516 | 0.660 ± 0.064 | 0.581 | 0.640 ± 0.063 | 0.578
Table 7. AUROC values obtained in the training (mean ± standard deviation) and testing phases for each wavelet kernel, with the three ML models considered. Values marked with an asterisk (*) highlight the best three obtained results.
Wavelet Kernel | XGB Train | XGB Test | SVM Train | SVM Test | RF Train | RF Test
Bior1.5 | 0.685 ± 0.045 | 0.652 * | 0.725 ± 0.044 | 0.689 * | 0.711 ± 0.047 | 0.706 *
Coif1 | 0.681 ± 0.046 | 0.655 * | 0.710 ± 0.046 | 0.670 | 0.708 ± 0.044 | 0.679
Db3 | 0.681 ± 0.050 | 0.593 | 0.708 ± 0.051 | 0.676 | 0.690 ± 0.044 | 0.653
Dmey | 0.684 ± 0.052 | 0.611 | 0.700 ± 0.049 | 0.650 | 0.678 ± 0.047 | 0.662
Haar | 0.710 ± 0.048 | 0.636 | 0.734 ± 0.047 | 0.677 * | 0.726 ± 0.046 | 0.686 *
Rbio1.5 | 0.674 ± 0.049 | 0.623 | 0.700 ± 0.050 | 0.649 | 0.697 ± 0.047 | 0.649
Sym2 | 0.694 ± 0.042 | 0.678 * | 0.718 ± 0.044 | 0.671 * | 0.704 ± 0.047 | 0.689 *
no wavelet | 0.651 ± 0.054 | 0.602 | 0.690 ± 0.046 | 0.629 | 0.677 ± 0.050 | 0.635