Article

Textural Analysis Supports Prostate MR Diagnosis in PIRADS Protocol

1 Urology Department, Ultragen Medical Center, 31-572 Krakow, Poland
2 Department of Diagnostic Imaging, Jagiellonian University Medical College, 31-501 Krakow, Poland
3 Faculty of Geology, Geophysics and Environmental Protection, AGH University of Science and Technology, 30-059 Krakow, Poland
4 Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
5 Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9871; https://doi.org/10.3390/app13179871
Submission received: 31 July 2023 / Revised: 27 August 2023 / Accepted: 30 August 2023 / Published: 31 August 2023

Abstract

Prostate cancer is one of the most common cancers in the world. Due to the ageing of society and the extended life of the population, early diagnosis is a great challenge for healthcare. Unfortunately, the currently available diagnostic methods, in which magnetic resonance imaging (MRI) using the PIRADS protocol plays an increasingly important role, are imperfect, mainly in the inability to visualise small cancer foci and in the misinterpretation of imaging data. Therefore, there is a great need to improve the methods currently applied and to look for even better ones for the early detection of prostate cancer. In the presented research, anonymised MRI scans of 92 patients with evaluation in the PIRADS protocol were selected from data routinely scanned for prostate cancer. Suspicious tissues were delineated manually under medical supervision. The texture features of the marked regions were calculated using the qMaZda software. A multiple-instance learning approach based on the SVM classifier allowed distinguishing between healthy and diseased prostate tissue. The best F1-score, equal to 0.77, with a high precision of 0.85 and a recall of 0.70, was recorded for the texture features describing the central zone. The research showed that the use of texture analysis in prostate MRI may allow for automation of the assessment of PIRADS scores.


1. Introduction

Prostate cancer (PCa) is the second most commonly diagnosed cancer and the second leading cause of cancer death among males worldwide [1]. Few well-proven risk factors for prostate cancer have been found, including age, race, and a positive family history of prostate cancer [2]. At the age of 30, prostate cancer is estimated to be present in a small percentage of the male population, but its incidence increases significantly for patients in their 50s [3]. Considering ethnic origin, the highest risk of prostate cancer is found in the male population of Europe and North America and the lowest in Asia and Africa [1]. Finally, prostate cancer in close male relatives, such as brothers or fathers, increases the risk of the disease several times compared to the general population [4]. Moreover, knowledge of the family history of the disease allows faster determination of the genetic type of the cancer, which may be associated with increased morbidity [5] and, thus, influence the choice of necessary treatment. Although a significant influence of genetic factors on the incidence of prostate cancer has been demonstrated, data indicating a worse course of the disease in these familial cases remain limited, which questions the need for aggressive screening in these groups of patients [4,6].
Attention should be paid to patients from the previously mentioned risk groups. A proper path of basic diagnostic methods that allows the early detection of prostate cancer is crucial. Currently, the level of prostate-specific antigen (PSA) in blood is determined and a digital rectal examination is suggested. However, the diagnostic value of physical examination alone is low [7]. In addition, due to the imperfect reliability of PSA blood concentration, which makes it difficult to introduce this marker as a population-wide screen, more accurate methods of early diagnosis of prostate cancer are sought.
In analogy to other types of cancer where imaging is used for screening, there is room for magnetic resonance imaging (MRI) to be introduced to a larger group of patients, perhaps in a simplified form, for example, as noncontrast biparametric MRI (nbMRI) combining T2-weighted imaging (TW2-MRI) and diffusion-weighted imaging (DWI) [8,9]. Performing MRI as a screening test results in higher detection of clinically significant cancer compared to PSA testing. Ultrasonography may be another modality to consider, although its superiority over PSA has not been proven; newer protocols, for example elastography, require evaluation of their effectiveness as a test more accessible than MRI [10].
Taking into account the high degree of genetic transmission, work is underway to better understand the molecular origin and to search for genes responsible for the development of various types of prostate cancer [11,12,13]. Among the thousands of candidate genes, mutations in breast cancer genes have been shown to be associated with the development of prostate cancer at an early age and a worse course of the disease [14,15]. Finding gene mutations in the group of patients with an aggressive course of prostate cancer enables screening of family members, allowing early diagnosis and effective treatment [16].
PIRADS is an acronym for the Prostate Imaging Reporting and Data System. It was introduced in 2012 in a consensus document summarizing the methods of interpretation of the sequences used for prostate imaging and the assessment of the probability of tumour presence [17,18]. After two years of practical experience with this approach, PIRADS 2.0 was introduced, containing changes that made the data interpretation process more accurate [19,20]. However, the observation of interpretational differences between teams, together with technical development, led to the introduction of PIRADS 2.1 in 2018 [21,22]. This approach uses a five-point assessment scale that indicates the probability that multi-parameter MRI (mp-MRI) findings correlate with the presence of clinically significant PCa in a particular anatomic location. The following assessment categories are defined:
  • Very low (clinically significant PCa highly unlikely to be present);
  • Low (clinically significant PCa unlikely to be present);
  • Intermediate (presence of clinically significant PCa is equivocal);
  • High (clinically significant PCa likely to be present);
  • Very high (clinically significant PCa highly likely to be present).
All suspected intraprostatic lesions seen on mp-MRI should be assigned to their zonal location (e.g., peripheral zone, including the central zone, or transition zone) on the sector map and given a PIRADS general assessment category [22]. Selecting a parameter then allows the images to be ordered in the subsequent analytic procedure.
Although MR image analysis is performed most frequently using the PIRADS scale, this tool has some limitations. Among others, there is a significant difference in the subjective assessment of images by independent radiologists [23,24]. Nevertheless, despite those discrepancies, targeted biopsy following the PIRADS diagnostic procedure detected prostate cancer foci significantly more frequently, especially in regions with tissue of increased aggressiveness as measured on the scale of the International Society of Urological Pathology (ISUP) [25]. This scale builds on the Gleason grades that determine the aggressiveness of the cancer tissue. Originally, MRI scanning was used in the group of patients requiring a second biopsy, but it has also been shown to be effective in patients undergoing biopsy for the first time, and MRI is now recommended as a necessary test before invasive diagnostics [26,27,28].
Adding MRI-targeted biopsy to systematic biopsy in biopsy-naïve patients increases the number of PCa cases detected. The detection of tissue with Gleason grade > 2 and grade > 3 increases by approximately 20% and 30%, respectively. In the repeat-biopsy setting, the addition of MRI-targeted biopsy increases PCa detection by approximately 40% (Gleason grade > 2) and 50% (grade > 3). Over the years, substantial scientific work has been performed to reduce interpretation bias. According to different studies, the overall performance of MRI PCa detection is high, with a sensitivity of 0.89 and a specificity of 0.73 [29].
The performance of PIRADS and lesion detection accuracy are negatively influenced by variations in the technical equipment used, differences in the patient population, and local interpretational habits [30]. Furthermore, the biological variability of the tumours makes some prostate lesions undetectable and, therefore, missed during radiological evaluation [31,32]. The subjectivity of interpretation, together with local interpretational habits, limits the correct recognition of tissue types [33,34]. It is a complex process, in which any discrepancies between radiologists, whether at the level of the changed tissue signal or in the determination of the changed region's shape and border, influence the outcome. Therefore, in the PIRADS-based assessment, inter- and intra-reader variation is reported [35,36]. Multireader agreement is moderate to good in the case of clinically insignificant PCa, and intra-reader reproducibility is not always achieved in the case of proven malignant prostate lesions [37,38,39,40]. Automated segmentation can significantly reduce interpretational differences, which is especially important for less experienced imaging professionals or where lesions are less visible due to technical problems (e.g., reduced image quality). The use of automated systems increases the level of agreement between readers [41].
This study aimed to find a relationship between the prostate gland assessment performed in the PIRADS protocol and the textural analysis of MRI scans. Finding such a relation would enable the creation of an automated protocol for MRI examinations. It could improve the detection of PCa foci in the group at risk of this disease and diminish inter-reader discrepancies.

2. Materials and Methods

For this research, a database containing MRI scans gathered according to the PIRADS protocol was prepared. Then, the textural features were calculated for each selected region. We evaluated whether the peripheral and central zones of the PCa region in MRI scans influence the descriptive capabilities of the chosen methods. We analysed textural features in pairs, applying the multiple-instance learning methodology. Below, we give a detailed description of each element of the proposed pipeline.

2.1. Prostate MRI Dataset

The MRI scans of 125 patients (24–87 years old) were acquired during standard diagnostic procedures that met the PIRADS protocol. However, for this research, only the 92 patients (24–85 years old) whose scans were free of artefacts were chosen. These patients did not have hip prostheses or artefacts from previous pelvic surgeries and were properly prepared for the examination (e.g., patients with excessive bowel content were excluded). The mean age of the patients was 60.3 ± 12.29 years.
For each patient, two data sequences were recorded: TW2-MRI axial sequences with 2 mm slices and a distance factor equal to 0, and DWI in single-shot echo planar imaging (EPI) mode with b-values of 0, 800, and 1500 and a distance factor of 0. Additionally, T1-weighted sequences were prepared for post-contrast evaluation. Table 1 summarises the parameters used for the preparation of the TW2-MRI sequence. A 1.5 T scanner was used.
The data were anonymised, and the procedures applied during data preparation complied with the Declaration of Helsinki and the Declaration of Good Clinical Practice [42]. The Local Ethics Committee approved the conduct of this study (No. 1072.6120.21.22, 23 February 2022). Next, 7 radiologists with proven experience in prostate imaging evaluated PCa during the standard diagnostic process using the PIRADS protocol. The data obtained were collected as a reference for the proposed PIRADS staging. The distribution of the PIRADS values in the cohort was as follows: 6% for PIRADS = 1, 30% each for PIRADS = 2 and 3, 20% for PIRADS = 4, and 15% for PIRADS = 5.
Images from the MRI scans were converted linearly from 12-bit to 8-bit data, so that pixel illumination values lay in the range of 0 to 255. Since each prostate MRI scan had around 10 projections, the dataset used in the research consisted of 751 MR images of prostate gland lesions. Figure 1 presents the variation of the projection number across MRI scans. Figure 2 shows a prostate MR image with a manual annotation depicting the peripheral and central prostate zones.
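The exact scaling of this conversion is not specified in the text; a minimal numpy sketch, assuming a fixed-range linear map from the full nominal 12-bit range onto 0–255 (rather than per-image min-max), could look as follows:

```python
import numpy as np

def to_uint8(scan: np.ndarray, in_bits: int = 12) -> np.ndarray:
    """Linearly rescale a 12-bit MR image onto the 8-bit range 0-255.

    Assumption: the map uses the full nominal input range (0 .. 2^12 - 1),
    not the per-image minimum and maximum."""
    max_in = 2 ** in_bits - 1
    return np.round(scan.astype(np.float64) * 255.0 / max_in).astype(np.uint8)
```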

2.2. Texture Analysis

Texture analysis aims at determining a small set of numerical values that clearly characterise the image content. These features should return similar values for images with the same texture statistics and differ significantly when the image content varies. In this work, we took advantage of the qMaZda tool [43], which allows the calculation of many textural features established in the image understanding domain [44]. It allowed us to determine almost 7000 features describing the annotated regions, additionally considering image normalization and quantization. We examined three options: texture analysis applied only to the peripheral prostate zone, only to the central zone, and to both regions treated as a whole.
The qMaZda software converted the input data into the YUV colour space and worked on the luminance Y channel only. Then, when necessary, a normalization procedure was applied. In the presented research, we calculated the final features for images that were not normalized (D), whose grey levels were linearly rescaled to the range between the minimal and maximal values (M) or between the 1st and 99th percentiles of the grey-level histogram (N), or whose grey levels were normalized in the range ⟨μ − 3σ, μ + 3σ⟩ (S), where μ is the mean illumination and σ stands for the standard deviation. In many cases, before the final texture features were calculated, an indirect matrix compactly representing the image content was created. The resolution of this matrix is related to the intensity range of the pixel values, and the number of data samples that fill it corresponds to the number of pixels in the image. Thus, for small images and a large range of pixel intensities, there were not enough data to fill the matrix: it became sparse, and the textural features calculated from its content were statistically unreliable. Since we did not know whether such a problem occurred, we calculated the textural features for data quantized from 5 to 8 bits (hence using matrices from 32 × 32 to 256 × 256 resolution for data of the same size).
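The normalization variants and bit-depth reduction described above can be summarised in code. The sketch below is our reading of the qMaZda options, not its actual implementation, and the function and parameter names are illustrative:

```python
import numpy as np

def normalize(y: np.ndarray, mode: str = "D") -> np.ndarray:
    """Grey-level normalization variants named in the text (D, M, N, S)."""
    y = y.astype(np.float64)
    if mode == "D":                        # default: keep the 8-bit range as-is
        lo, hi = 0.0, 255.0
    elif mode == "M":                      # min-max rescaling
        lo, hi = y.min(), y.max()
    elif mode == "N":                      # 1st and 99th percentiles
        lo, hi = np.percentile(y, [1, 99])
    else:                                  # "S": mean +/- 3 standard deviations
        mu, sigma = y.mean(), y.std()
        lo, hi = mu - 3 * sigma, mu + 3 * sigma
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

def quantize(y01: np.ndarray, bits: int) -> np.ndarray:
    """Reduce normalized data to 5-8 bits before building indirect matrices."""
    levels = 2 ** bits                     # 32 ... 256 grey levels
    return np.minimum((y01 * levels).astype(np.int64), levels - 1)
```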
To numerically describe the image content, we analysed simple statistics of the pixel illumination values in the image. The first method calculated an image brightness histogram (Hist), from which a basic set of statistical features was determined, reflecting the area, mean, variance, skewness, kurtosis, and percentile features of the signal. Next, rapid changes in illumination were determined in a small neighbourhood. Here, we tried to determine the image contrast by using various approaches to finding edges, starting with gradient map features (Grad) [45], which were transformed into a histogram described by the mean, variance, skewness, kurtosis, and the number of non-zero elements. Another approach analyses the influence of pixels placed close to each other. The autoregressive model (Arm) [46] searches for the optimal solution, and its 4 parameters return information about the directionality of the pattern, showing in which direction changes are most common in the data. The Gabor (Gab) transform analyses the frequency components in a local neighbourhood. It is parametrised by the Gaussian envelope and orientation, thus allowing various frequencies, seen as repetitions of some signal characteristic in one direction, to be captured. To obtain more information about the edges, Haar wavelets [47] were applied, and the sub-band energies became the features.
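To make the first-order descriptors concrete, a small sketch with numpy and scipy is given below. The feature names mirror those appearing in the result tables (HistMean, GradMean, and so on), but the exact definitions are our assumptions, not qMaZda's formulas:

```python
import numpy as np
from scipy import stats

def histogram_features(roi: np.ndarray) -> dict:
    """First-order statistics of the pixel intensities inside a region."""
    v = roi.ravel().astype(np.float64)
    return {"HistMean": v.mean(), "HistVariance": v.var(),
            "HistSkewness": stats.skew(v), "HistKurtosis": stats.kurtosis(v),
            "HistPerc01": np.percentile(v, 1)}

def gradient_features(roi: np.ndarray) -> dict:
    """Statistics of the gradient magnitude map (cf. the Grad features)."""
    gy, gx = np.gradient(roi.astype(np.float64))
    mag = np.hypot(gx, gy)
    return {"GradMean": mag.mean(), "GradVariance": mag.var(),
            "GradSkewness": stats.skew(mag.ravel()),
            "GradNonZeros": np.count_nonzero(mag) / mag.size}
```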
As mentioned before, the more complex approaches require the creation of indirect matrices. In the case of the grey-level co-occurrence matrix (GLCM) [45], each matrix cell counts how many times a pair of pixels with illumination values corresponding to its placement in the matrix occurs. This approach allowed us to describe larger patterns visible in the image. It was possible to parameterise the direction (vertical (V), horizontal (H), and the two diagonals (Z, N)) in which a pair of pixels was considered. This method also allowed us to determine whether a similar pair of pixels lay next to each other or at a larger distance (we evaluated distances in the range from 1 to 5 pixels). Finally, several parameters were calculated from the matrix, for example, contrast, entropy, and correlation. It was noticed that textures with rapid changes in illumination levels differ from those where such changes are rare. This phenomenon was analysed by the grey-level run-length matrix (GRLM) method [48]. In this case, the indirect matrix is indexed by the pixel illumination value and the number of consecutive pixels of the same colour in one direction. Many entries with a large number of consecutive pixels reflect an image with large objects or stripes in the analysed direction. Similarly to the previous case, four directions were considered. From this matrix, at least five texture features were calculated: area, short-run emphasis, long-run emphasis, grey-level non-uniformity, and run-length non-uniformity.
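A GLCM extractor in the spirit described above can be sketched with scikit-image; contrast and correlation come from the library, while difference entropy (DifEntrp) is computed by hand from the co-occurrence matrix. The mapping of the Z and N diagonals to the angles π/4 and 3π/4, and the truncated feature names, are assumptions modelled on the tables:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_q: np.ndarray, levels: int = 32, distance: int = 3) -> dict:
    """GLCM features in four directions for quantized input (values < levels)."""
    angles = [0, np.pi / 2, np.pi / 4, 3 * np.pi / 4]   # H, V, Z, N (assumed)
    glcm = graycomatrix(roi_q.astype(np.uint8), distances=[distance],
                        angles=angles, levels=levels, symmetric=True, normed=True)
    feats = {}
    for i, d in enumerate("HVZN"):
        feats[f"Glcm{d}{distance}Contrast"] = graycoprops(glcm, "contrast")[0, i]
        feats[f"Glcm{d}{distance}Correlat"] = graycoprops(glcm, "correlation")[0, i]
        # Difference entropy: entropy of the |i - j| distribution of the GLCM.
        p = glcm[:, :, 0, i]
        k = np.arange(levels)
        p_diff = np.array([p[np.abs(k[:, None] - k[None, :]) == n].sum()
                           for n in range(levels)])
        nz = p_diff[p_diff > 0]
        feats[f"Glcm{d}{distance}DifEntrp"] = -np.sum(nz * np.log(nz))
    return feats
```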
More sophisticated approaches analyse each pixel in its small neighbourhood before determining the descriptive features but, in consequence, result in longer feature vectors (having hundreds or thousands of entries instead of up to ten, as in the previous methods). The local binary patterns (LBP) method [49] defines, for each pixel, a code considering a circular neighbourhood whose size is parametrized by a radius, while the code resolution is related to the number of points sampled on the circumference (usually 8). The codes are gathered in the form of a histogram, which becomes the feature vector. Another popular method operating in this way is the histogram of oriented gradients (HOG) [50]. Here, the image is divided into blocks in which the gradients are calculated, and their orientations are organized into histograms. Histograms of adjoining blocks are normalized together in order to remove some illumination changes and then become a feature vector. In qMaZda, it is possible to choose from 4, 8, 16, and 32 bins. The acronyms introduced in this section are later used to name precisely the features used in the experiments.
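Both pixel-neighbourhood descriptors are available in scikit-image; the sketch below shows how such longer feature vectors arise. The radius, number of sampling points, and bin counts are chosen for illustration only:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(roi: np.ndarray, radius: int = 2, points: int = 8) -> np.ndarray:
    """Histogram of uniform LBP codes over a circular neighbourhood."""
    codes = local_binary_pattern(roi, P=points, R=radius, method="uniform")
    n_bins = points + 2                 # the 'uniform' mapping yields P + 2 codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def hog_vector(roi: np.ndarray, bins: int = 8) -> np.ndarray:
    """Block-normalized histogram-of-oriented-gradients descriptor."""
    return hog(roi, orientations=bins, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
```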

2.3. Multiple Instance Learning by Support Vector Machine

The purpose of this research was to verify whether it is possible to train a model that distinguishes with high probability between prostate cancer patients and healthy ones. As input, the TW2-MRI scans with annotations were taken, which resulted in several images describing the same patient that should be analysed together. For each region of interest, a large number of textural features was calculated. Additionally, the data were supplemented with the PIRADS information given per patient. The PIRADS protocol takes values in the range of 1 to 5; however, values of PIRADS < 4 describe a healthy person and were all assigned to one class (label 0), while PIRADS ≥ 4 defines PCa, and those values were grouped together (label 1). Therefore, this problem can be treated as binary classification. Because the dataset was not balanced (there were 62 healthy patients and 30 ill ones), automatic class balancing (weighting) was enabled in the classifier during training.
In the dataset, as mentioned above, each patient was described by several images. However, in the case of changed tissue, it was not known whether the change was visible in only one image or in several of them. Therefore, we could not train a classifier on separate images from the series, because the change was not always visible while the PIRADS value was the same for all images; that would confuse the model. We therefore used the multiple-instance learning approach to address this problem. In this setting, a subject may be described by several images, and it is sufficient that one of them represents PCa tissue to state that the patient's PIRADS should be ≥ 4. The model was obtained using a support vector machine (SVM) as a binary classifier. Classification was performed on each image, but the output was constructed considering all data describing one patient. This method was implemented using the 'mil' Python library.
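We cannot reproduce the 'mil' library internals here, but the decision rule described above, where every image inherits its patient's label for training and a patient is called positive if any of their images is predicted positive, can be sketched with scikit-learn as follows (a naive single-instance-learning baseline, not the library's exact algorithm; the helper names are ours):

```python
import numpy as np
from sklearn.svm import SVC

def fit_mil_svm(bags: list, bag_labels: np.ndarray) -> SVC:
    """Train an instance-level SVM where each image inherits its bag's label."""
    X = np.vstack(bags)                                  # all images, stacked
    y = np.concatenate([[lab] * len(bag) for bag, lab in zip(bags, bag_labels)])
    clf = SVC(kernel="linear", C=10, class_weight="balanced")
    clf.fit(X, y)
    return clf

def predict_bag(clf: SVC, bag: np.ndarray) -> int:
    """A patient is labelled ill (PIRADS >= 4) if any image is predicted ill."""
    return int(clf.predict(bag).max())
```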
Since there were only 92 samples (patients), a leave-one-out (LOO) methodology was applied to ensure the generality of the obtained results. This method trains the model on N − 1 samples and tests it on the remaining one, repeating the experiment as many times as there are samples, each time selecting a different sample for the test set. In consequence, we used 91 samples for training and one for testing and repeated this experiment 92 times.
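Reusing the sketch above, the leave-one-out loop at the patient (bag) level could look like this:

```python
import numpy as np
from sklearn.metrics import f1_score

def loo_over_bags(bags: list, bag_labels: np.ndarray) -> float:
    """Train on N - 1 patients, test on the held-out one, repeat N times."""
    preds = []
    for i in range(len(bags)):
        train_bags = bags[:i] + bags[i + 1:]
        clf = fit_mil_svm(train_bags, np.delete(bag_labels, i))
        preds.append(predict_bag(clf, bags[i]))
    return f1_score(bag_labels, np.array(preds))
```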

3. Results

This study addressed the problem of automatically determining whether a set of MRI scans from prostate visualization presents a healthy patient or PCa. To verify whether such functionality can be obtained, three experimental scenarios were performed. All of them took advantage of texture analysis of the region of interest: first, the prostate central zone was analysed, then the peripheral zone, and finally both regions merged together. For each scan, all textural features presented in Section 2.2 were calculated applying all considered normalization and quantization approaches, yielding large feature sets (6678 features for the central zone, 4898 for the peripheral zone, and 6718 for both regions). Between-feature correlations were determined, and features carrying repetitive information were neglected (around 60,000/54,000/64,000 highly correlated feature pairs for the central zone, peripheral zone, and both regions, respectively). From the remaining features, we evaluated which pairs allowed for binary classification of the dataset, applying the LOO methodology. Moreover, because the data were imbalanced, the SVM model was trained with the balanced parameter turned on, and we used precision, recall, and the F1-score as metrics, as they better reflect classifier performance on unbalanced data. For the SVM, a linear kernel was chosen with the regularization factor equal to 10.
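The correlation-based pruning step can be sketched as follows; the correlation threshold is not stated in the text, so the value below is an assumption:

```python
import numpy as np

def drop_correlated(X: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Keep the first feature of every highly correlated pair, drop the rest.

    X has one row per image and one column per textural feature."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = X.shape[1]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        for j in range(i + 1, n):
            if keep[j] and corr[i, j] > threshold:
                keep[j] = False
    return X[:, keep]
```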
Table 2, Table 3 and Table 4 gather the 10 best results recorded in each case. Each row presents a model's performance together with the names of the textural features (columns Feature 1 and Feature 2) used to train it. From the outcomes, we can notice that, regardless of the considered region, it was possible to distinguish between healthy and diseased prostate tissue with an F1-score of at least 0.70. Using the features derived from the central zone, more precise results were achieved, with the best F1-score equal to almost 0.77, a high precision equal to 0.85, and a recall equal to 0.70. The better performance of texture analysis on the central zone may be due to two factors. First, the central zone's shape was circular, which made the feature calculation similar in each direction. Second, this region was larger; thus, it is easier to find characteristic features there. The deterioration of results when both regions were treated as one suggests that features calculated across the region boundary made the classifier less confident.
Each texture feature name starts with Y, reflecting that the Y channel from the YUV colour space was used for the calculation. The following letter corresponds to the data normalization technique, followed by the number of bits used for the quantized data representation. After this prefix comes the acronym of the texture analysis method, followed by the direction of calculation and the distance applied (if used), and terminating with the feature name.
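A hypothetical decoder for this naming scheme, useful when reading the tables, might look as follows; the pattern reflects our reading of the convention, not an official qMaZda parser:

```python
import re

# E.g. "YN7GlcmZ3DifEntrp": channel Y, normalization N, 7-bit quantization,
# GLCM method, and the method-specific suffix "Z3DifEntrp" (direction Z,
# distance 3, feature DifEntrp).
PATTERN = re.compile(r"^(?P<channel>Y)(?P<norm>[DMNS])(?P<bits>\d)"
                     r"(?P<method>Glcm|Grlm|Hist|Grad|Arm|Gab|Hog|Dwt)"
                     r"(?P<rest>.*)$")

print(PATTERN.match("YN7GlcmZ3DifEntrp").groupdict())
# {'channel': 'Y', 'norm': 'N', 'bits': '7', 'method': 'Glcm', 'rest': 'Z3DifEntrp'}
```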

4. Discussion

An interesting aspect of the results was which features provided the best characteristics. In all experiments, one feature was derived from the GLCM matrix, while the other concentrated on the grey-level distribution (Hist, Arm features) or gradient orientation (Gab, Haar). It is interesting to note that the GLCM was more descriptive when the number of bits decreased (in most cases, 4 or 5 bits were used), resulting in less sparse inner matrices from which to calculate the features. In the case of grey-level intensities and gradients, mostly 8-bit data were applied; the only exception was the YD5ArmTeta4 feature when the whole region was analysed. It is difficult to draw any conclusions regarding the influence of data normalization on the results, as features derived with all variants appear.
Figure 3 compactly presents the 500 best scores achieved in each case. These scatter plots show that a large number of textural feature pairs allow well-performing models to be created. They also show that, in some weaker cases, several models obtain comparable outcomes. However, when all those features are used together, the results do not improve. Finally, from this graphical presentation, we also see clearly that the F1-score grows proportionally to the accuracy, which was not so apparent when analysing the smaller sets of data in the tables. All results are given in the Supplementary Materials.
Previous studies indicate that computer-assisted results are not worse than the methods used so far; however, it is too early to assess the full advantage of artificial intelligence in analysing MR imaging [51]. Computer-assisted prostate interpretation showed that artificial intelligence (AI) systems are capable of increasing the accuracy of MRI interpretation [51,52]. The results presented in [53] agree with these conclusions, showing a precision of 85% for PIRADS = 4. Winkel et al. [53] demonstrated the ability of their system to increase not only the accuracy but also the speed of interpretation. Next, in the systematic review prepared by Sushentsev et al. [54], the authors report comparable performance of fully automated and AI-assisted techniques. However, in the review by Roest et al. [55], the automated system matched the performance of expert radiologists but with lower sensitivity. Implementing AI models markedly increased the accuracy of prostate image interpretation, and there are particular expectations for the interpretation of PIRADS = 3 [56]. In a study by Liu et al. [57], which evaluated a method based on textural classification, a sensitivity of 0.85 and a specificity of 0.73 were found in the demanding PIRADS = 3 category. On the other hand, Arslan et al. [58] reported no clear benefit from the use of deep learning software in studies carried out by radiologists with different levels of experience. Compared to the similar works undertaken by Giannini et al. [59,60], obtaining a correct classification F1-score greater than 76% is a good result for texture features attributed to different lesions assigned to PIRADS grades, especially considering the much more homogeneous group of confirmed tumours in the second work. Finally, an excellent review of the application of AI techniques to the classification of prostate lesions across different MRI modalities is given in [61]. The authors report a mean sensitivity of 0.80, which is close to our results. This puts our approach in the mainstream of results currently obtained with machine learning approaches.
In this research, we decided to treat prostate MRI analysis as a binary classification problem, in which only healthy (PIRADS < 4) and PCa (PIRADS ≥ 4) cases were distinguished. This simplified the task and diminished the influence of class imbalance in the gathered dataset. Since this was a trial study evaluating whether texture analysis is applicable, we found this simplification justifiable. However, we are aware that in further research, all five PIRADS grades should be addressed.
Our work was conducted using a 1.5 T scanner, which might be perceived as a limitation of the study, as the PIRADS Steering Committee prefers 3 T scanners over 1.5 T. However, 1.5 T scanners are accepted because the application of new technologies allows them to provide adequate images and reliable diagnostic results [62]. Moreover, 1.5 T scanners are prevalent in the healthcare market, which naturally encourages their use. Finally, deriving information from 1.5 T scanner data with texture analysis helps compensate for possible data inaccuracy.
This study's strengths can be summarised as follows: we prepared a dataset of MRI prostate scans supported with PIRADS grading. Next, this study showed that, using textural features to describe the MRI scan content, it is possible to achieve very high correspondence with the manually obtained results of the PIRADS protocol. This allows for an optimistic approach to the creation of automated systems assessing MRI data in the future. It should also diminish the problems of intra-reader discrepancies and tiresome procedures.
There are also some weaknesses to the presented work. Firstly, we were limited by the number of accessible cases; this issue was mitigated by the proper choice of evaluation methods, and we hope that, given such promising results, it will be possible to prepare a much larger dataset. Next, in data acquisition, we concentrated on gathering a homogeneous study group; however, a comparative evaluation with MRI scans obtained from different equipment and assessed by other radiologists preparing PIRADS scores would be beneficial. Another clear weakness of this study was the lack of homogeneity of the patients, partially caused by the assumption that we should work with data gathered during a standard diagnostic process; the PIRADS grades were thus formed for patients of various ages, BMI, and overall health conditions.

5. Conclusions

In the current study, the feasibility of an automated approach to the analysis of prostate MR images was evaluated. The results are promising, as the best F1-score of almost 77% was achieved for the classification of patients into healthy (PIRADS < 4) and PCa (PIRADS ≥ 4) groups. This good result shows that it is possible to represent the prostate gland with textural features and apply machine learning algorithms to build models that enable easy recognition of the considered cases.
The results obtained allow the development of an objective prostate gland evaluation process and can be used in the future to prepare protocols for the automatic analysis of prostate images. This is especially welcome to the medical community, as prostate MRI is becoming a gold standard in the diagnostic process. However, there are many uncertainties due to interobserver differences in rating, even among experienced radiologists. Therefore, a response supported by an automated approach may serve as a second opinion; additionally, it could be a significant aid for less experienced physicians. In both cases, it should reduce possible diagnostic errors. There is high demand for a unified system for determining the probability of prostate cancer; it can be developed in the future based on the proposed approaches toward objectification of the diagnostic process, with an increase in patient comfort and the legal safety of healthcare providers.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app13179871/s1, Table S1: All results for models built from pair of features in all considered experiment cases.

Author Contributions

Conceptualization, S.G., R.O., A.P. and K.N.; methodology, S.G., R.O., A.P. and K.N.; software, K.N.; validation, K.N.; formal analysis, S.G., R.O. and K.N.; investigation, S.G., R.O. and K.N.; resources, S.G., R.O. and A.P.; data curation, S.G., R.O., J.L. and A.P.; writing—original draft preparation, S.G., R.O. and K.N.; writing—review and editing, S.G., R.O. and K.N.; visualization, S.G., R.O., A.P. and K.N.; supervision, S.G., R.O. and K.N.; project administration, S.G., R.O., A.P. and K.N.; funding acquisition, S.G. and R.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and was approved by the Institutional Review Board of Jagiellonian University (no 1072.6120.21.22, 23 February 2022).

Informed Consent Statement

This study was retrospective in nature; anonymized images acquired during routine examinations were used with the permission of the Bioethics Committee.

Data Availability Statement

One hundred and fourteen T2-weighted MRI 3D images of the prostate with corresponding segmentations of transition and peripheral zones are available on the Zenodo website, https://doi.org/10.5281/zenodo.7676958 (accessed on 30 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Culp, M.B.; Soerjomataram, I.; Efstathiou, J.A.; Bray, F.; Jemal, A. Recent Global Patterns in Prostate Cancer Incidence and Mortality Rates. Eur. Urol. 2020, 77, 38. [Google Scholar] [CrossRef] [PubMed]
  2. Leitzmann, M.F.; Rohrmann, S. Risk factors for the onset of prostatic cancer: Age, location, and behavioral correlates. Clin. Epidemiol. 2012, 4, 1–11. [Google Scholar] [CrossRef]
  3. Bell, K.J.; Del Mar, C.; Wright, G.; Dickinson, J.; Glasziou, P. Prevalence of incidental prostate cancer: A systematic review of autopsy studies. Int. J. Cancer 2015, 137, 1749. [Google Scholar] [CrossRef] [PubMed]
  4. Hemminki, K. Familial risk and familial survival in prostate cancer. World J. Urol. 2012, 30, 143. [Google Scholar] [CrossRef]
  5. Jansson, K.F.; Akre, O.; Garmo, H.; Bill-Axelson, A.; Adolfsson, J.; Stattin, P.; Bratt, O. Concordance of tumor differentiation among brothers with prostate cancer. Eur. Urol. 2012, 62, 656. [Google Scholar] [CrossRef]
  6. Randazzo, M.; Müller, A.; Carlsson, S.; Eberli, D.; Huber, A.; Grobholz, R.; Manka, L.; Mortezavi, A.; Sulser, T.; Recker, F.; et al. A positive family history as a risk factor for prostate cancer in a population-based study with organised prostate-specific antigen screening: Results of the Swiss European Randomised Study of Screening for Prostate Cancer (ERSPC, Aarau). BJU Int. 2016, 117, 576. [Google Scholar] [CrossRef]
  7. Carvalhal, G.F.; Smith, D.S.; Mager, D.E.; Ramos, C.; Catalona, W.J. Digital rectal examination for detecting prostate cancer at prostate specific antigen levels of 4 ng./mL. or less. J. Urol. 1999, 161, 835. [Google Scholar] [CrossRef]
  8. Eldred-Evans, D.; Tam, H.; Sokhi, H.; Padhani, A.R.; Winkler, M.; Ahmed, H.U. Rethinking prostate cancer screening: Could MRI be an alternative screening test? Nat. Rev. Urol. 2020, 17, 526–539. [Google Scholar] [CrossRef]
  9. Nam, R.K.; Wallis, C.J.; Stojcic-Bendavid, J.; Milot, L.; Sherman, C.; Sugar, L.; Haider, M.A. A Pilot Study to Evaluate the Role of Magnetic Resonance Imaging for Prostate Cancer Screening in the General Population. J. Urol. 2016, 196, 361–366. [Google Scholar] [CrossRef]
  10. Eldred-Evans, D.; Burak, P.; Connor, M.J.; Day, E.; Evans, M.; Fiorentino, F.; Gammon, M.; Hosking-Jervis, F.; Klimowska-Nassar, N.; McGuire, W.; et al. Population-based prostate cancer screening with magnetic resonance imaging or ultrasonography: The IP1-PROSTAGRAM Study. JAMA Oncol. 2021, 7, 395. [Google Scholar] [CrossRef]
  11. Al Olama, A.A.; Dadaev, T.; Hazelett, D.J.; Li, Q.; Leongamornlert, D.; Saunders, E.J.; Stephens, S.; Cieza-Borrella, C.; Whitmore, I.; Garcia, S.B.; et al. Multiple novel prostate cancer susceptibility signals identified by fine-mapping of known risk loci among Europeans. Hum. Mol. Genet. 2015, 24, 5589. [Google Scholar] [CrossRef] [PubMed]
  12. Eeles, R.A.; The COGS–Cancer Research UK GWAS–ELLIPSE (part of GAME-ON) Initiative; Al Olama, A.A.; Benlloch, S.; Saunders, E.J.; Leongamornlert, D.; Tymrakiewicz, M.; Ghoussaini, M.; Luccarini, C.; Dennis, J.; et al. Identification of 23 new prostate cancer susceptibility loci using the iCOGS custom genotyping array. Nat. Genet. 2013, 45, 385. [Google Scholar] [CrossRef] [PubMed]
  13. Schumacher, F.R.; Al Olama, A.A.; Berndt, S.I.; Benlloch, S.; Ahmed, M.; Saunders, E.J.; Dadaev, T.; Leongamornlert, D.; Anokian, E.; Cieza-Borrella, C.; et al. Association analyses of more than 140,000 men identify 63 new prostate cancer susceptibility loci. Nat. Genet. 2018, 50, 928. [Google Scholar] [CrossRef] [PubMed]
  14. Ewing, C.M.; Ray, A.M.; Lange, E.M.; Zuhlke, K.A.; Robbins, C.M.; Tembe, W.D.; Wiley, K.E.; Isaacs, S.D.; Johng, D.; Wang, Y.; et al. Germline Mutations in HOXB13 and Prostate-Cancer Risk. N. Engl. J. Med. 2012, 366, 141. [Google Scholar] [CrossRef]
  15. Lynch, H.T.; Kosoko-Lasaki, O.; Leslie, S.W.; Rendell, M.; Shaw, T.; Snyder, C.; D’Amico, A.V.; Buxbaum, S.; Isaacs, W.B.; Loeb, S.; et al. Screening for familial and hereditary prostate cancer. Int. J. Cancer 2016, 138, 2579. [Google Scholar] [CrossRef]
  16. Giri, V.N.; Knudsen, K.E.; Kelly, W.K.; Cheng, H.H.; Cooney, K.A.; Cookson, M.S.; Dahut, W.; Weissman, S.; Soule, H.R.; Petrylak, D.P.; et al. Implementation of Germline Testing for Prostate Cancer: Philadelphia Prostate Cancer Consensus Conference 2019. J. Clin. Oncol. 2020, 38, 2798. [Google Scholar] [CrossRef]
  17. Dickinson, L.; Ahmed, H.U.; Allen, C.; Barentsz, J.O.; Carey, B.; Futterer, J.J.; Heijmink, S.W.; Hoskin, P.J.; Kirkham, A.; Padhani, A.R.; et al. Magnetic Resonance Imaging for the Detection, Localisation, and Characterisation of Prostate Cancer: Recommendations from a European Consensus Meeting. Eur. Urol. 2011, 59, 477–494. [Google Scholar] [CrossRef]
  18. Barentsz, J.O.; Richenberg, J.; Clements, R.; Choyke, P.; Verma, S.; Villeirs, G.; Rouviere, O.; Logager, V.; Fütterer, J.J. European Society of Urogenital Radiology ESUR prostate MR guidelines 2012. Eur. Radiol. 2012, 22, 746–757. [Google Scholar] [CrossRef]
  19. Walker, S.M.; Türkbey, B. PI-RADSv2.1: Current status. Turk J. Urol. 2021, 47 (Supp. 1), S45–S48. [Google Scholar] [CrossRef]
  20. Weinreb, J.C.; Barentsz, J.O.; Choyke, P.L.; Cornud, F.; Haider, M.A.; Macura, K.J.; Margolis, D.; Schnall, M.D.; Shtern, F.; Tempany, C.M.; et al. PI-RADS prostate imaging—Reporting and data system: 2015, Version 2. Eur. Urol. 2016, 69, 16–40. [Google Scholar] [CrossRef]
  21. Zhang, L.; Tang, M.; Chen, S.; Lei, X.; Zhang, X.; Huan, Y. A meta-analysis of use of prostate imaging reporting and data system version 2 (PI-RADS V2) with multiparametric MR imaging for the detection of prostate cancer. Eur. Radiol. 2017, 27, 5204–5214. [Google Scholar] [CrossRef] [PubMed]
  22. Turkbey, B.; Rosenkrantz, A.B.; Haider, M.A.; Padhani, A.R.; Villeirs, G.; Macura, K.J.; Tempany, C.M.; Choyke, P.L.; Cornud, F.; Margolis, D.J.; et al. Prostate imaging reporting and data system version 2.1: 2019 update of prostate imaging reporting and data system version 2. Eur. Urol. 2019, 76, 340–351. [Google Scholar] [CrossRef]
  23. Barentsz, J.O.; Weinreb, J.C.; Verma, S.; Thoeny, H.C.; Tempany, C.M.; Shtern, F.; Padhani, A.R.; Margolis, D.; Macura, K.J.; Haider, M.A.; et al. Synopsis of the PI-RADS v2 Guidelines for Multiparametric Prostate Magnetic Resonance Imaging and Recommendations for Use. Eur. Urol. 2016, 69, 41–49. [Google Scholar] [CrossRef] [PubMed]
  24. Muller, B.G.; Shih, J.H.; Sankineni, S.; Marko, J.; Rais-Bahrami, S.; George, A.K.; de la Rosette, J.J.M.C.H.; Merino, M.J.; Wood, B.J.; Pinto, P.; et al. Prostate Cancer: Interobserver Agreement and Accuracy with the Revised Prostate Imaging Reporting and Data System at Multiparametric MR Imaging. Radiology 2015, 277, 741–750. [Google Scholar] [CrossRef]
  25. Drost, F.-J.H.; Osses, D.F.; Nieboer, D.; Steyerberg, E.W.; Bangma, C.H.; Roobol, M.J.; Schoots, I.G. Prostate MRI, with or without MRI-targeted biopsy, and systematic biopsy for detecting prostate cancer. Cochrane Database Syst. Rev. 2019, 4, CD012663. [Google Scholar] [CrossRef] [PubMed]
  26. Kasivisvanathan, V.; Rannikko, A.S.; Borghi, M.; Panebianco, V.; Mynderse, L.A.; Vaarala, M.H.; Briganti, A.; Budäus, L.; Hellawell, G.; Hindley, R.G.; et al. MRI-Targeted or Standard Biopsy for Prostate-Cancer Diagnosis. N. Engl. J. Med. 2018, 378, 1767. [Google Scholar] [CrossRef]
  27. Rouvière, O.; Puech, P.; Renard-Penna, R.; Claudon, M.; Roy, C.; Mège-Lechevallier, F.; Decaussin-Petrucci, M.; Dubreuil-Chambardel, M.; Magaud, L.; Remontet, L.; et al. Use of prostate systematic and targeted biopsy on the basis of multiparametric MRI in biopsy-naive patients (MRI-FIRST): A prospective, multicentre, paired diagnostic study. Lancet Oncol. 2019, 20, 100. [Google Scholar] [CrossRef]
  28. van der Leest, M.; Cornel, E.; Israël, B.; Hendriks, R.; Padhani, A.R.; Hoogenboom, M.; Zamecnik, P.; Bakker, D.; Setiasti, A.Y.; Veltman, J.; et al. Head-to-head Comparison of Transrectal Ultrasound-guided Prostate Biopsy Versus Multiparametric Prostate Resonance Imaging with Subsequent Magnetic Resonance-guided Biopsy in Biopsy-naïve Men with Elevated Prostate-specific Antigen: A Large Prospective Multicenter Clinical Study. Eur. Urol. 2019, 75, 570. [Google Scholar]
  29. Woo, S.; Suh, C.H.; Kim, S.Y.; Cho, J.Y.; Kim, S.H. Diagnostic performance of prostate imaging reporting and data system version 2 for detection of prostate cancer: A systematic review and diagnostic meta-analysis. Eur. Urol. 2017, 72, 177–188. [Google Scholar] [CrossRef]
  30. Eastham, J.A.; Auffenberg, G.B.; Barocas, D.A.; Chou, R.; Crispino, T.; Davis, J.W.; Eggener, S.; Horwitz, E.M.; Kane, C.J.; Kirkby, E.; et al. Clinically Localized Prostate Cancer: AUA/ASTRO Guideline, Part I: Introduction, Risk Assessment, Staging, and Risk-Based Management. J. Urol. 2022, 208, 10–18. [Google Scholar] [CrossRef]
  31. Johnson, D.C.; Raman, S.S.; Mirak, S.A.; Kwan, L.; Bajgiran, A.M.; Hsu, W.; Maehara, C.K.; Ahuja, P.; Faiena, I.; Pooli, A.; et al. Detection of Individual Prostate Cancer Foci via Multiparametric Magnetic Resonance Imaging. Eur. Urol. 2019, 75, 712–720. [Google Scholar] [CrossRef] [PubMed]
  32. Lee, M.S.; Moon, M.H.; Kim, Y.A.; Sung, C.K.; Woo, H.; Jeong, H.; Son, H. Is prostate imaging reporting and data system version 2 sufficiently discovering clinically significant prostate cancer? Per-lesion radiology-pathology correlation study. AJR Am. J. Roentgenol. 2018, 211, 114–120. [Google Scholar] [CrossRef] [PubMed]
  33. Kundel, H.L. Reader error, object recognition, and visual search. Radiology 2002, 222, 453–459. [Google Scholar]
  34. Manning, D.J.; Gale, A.; Krupinski, E.A. Perception research in medical imaging. Br. J. Radiol. 2005, 78, 683–685. [Google Scholar] [CrossRef] [PubMed]
  35. Rosenkrantz, A.B.; Ayoola, A.; Hoffman, D.; Khasgiwala, A.; Prabhu, V.; Smereka, P.; Somberg, M.; Taneja, S.S. The Learning Curve in Prostate MRI Interpretation: Self-Directed Learning Versus Continual Reader Feedback. Am. J. Roentgenol. 2017, 208, W92–W100. [Google Scholar] [CrossRef]
  36. Rosenkrantz, A.B.; Ginocchio, L.A.; Cornfeld, D.; Froemming, A.T.; Gupta, R.T.; Turkbey, B.; Westphalen, A.C.; Babb, J.S.; Margolis, D.J.; Shankar, P.R.; et al. Interobserver Reproducibility of the PI-RADS Version 2 Lexicon: A Multicenter Study of Six Experienced Prostate Radiologists. Radiology 2016, 280, 793–804. [Google Scholar] [CrossRef]
  37. Flood, T.F.; Pokharel, S.S.; Patel, N.U.; Clark, T.J. Accuracy and Interobserver Variability in Reporting of PI-RADS Version 2. J. Am. Coll. Radiol. 2017, 14, 1202–1205. [Google Scholar] [CrossRef]
  38. Purysko, A.S.; Bittencourt, L.K.; Bullen, J.A.; Mostardeiro, T.R.; Herts, B.R.; Klein, E.A. Accuracy and Interobserver Agreement for Prostate Imaging Reporting and Data System, Version 2, for the Characterization of Lesions Identified on Multiparametric MRI of the Prostate. Am. J. Roentgenol. 2017, 209, 339–349. [Google Scholar] [CrossRef]
  39. Glazer, D.I.; Mayo-Smith, W.W.; Sainani, N.I.; Sadow, C.A.; Vangel, M.G.; Tempany, C.M.; Dunne, R.M. Interreader Agreement of Prostate Imaging Reporting and Data System Version 2 Using an In-Bore MRI-Guided Prostate Biopsy Cohort: A Single Institution’s Initial Experience. Am. J. Roentgenol. 2017, 209, W145–W151. [Google Scholar] [CrossRef]
  40. Chen, F.; Cen, S.; Palmer, S. Application of prostate imaging reporting and data system version 2 (PI-RADS v2): Interobserver agreement and positive predictive value for localization of intermediate- and high-grade prostate cancers on multiparametric magnetic resonance imaging. Acad. Radiol. 2017, 24, 1101–1106. [Google Scholar] [CrossRef]
  41. Syer, T.; Mehta, P.; Antonelli, M.; Mallett, S.; Atkinson, D.; Ourselin, S.; Punwani, S. Artificial Intelligence Compared to Radiologists for the Initial Diagnosis of Prostate Cancer on Magnetic Resonance Imaging: A Systematic Review and Recommendations for Future Studies. Cancers 2021, 13, 3318. [Google Scholar] [CrossRef]
  42. World Medical Association. Issue Information-Declaration of Helsinki. J. Bone Miner. Res. 2018, 33, 34–51. [Google Scholar] [CrossRef]
  43. Szczypinski, P.M.; Klepaczko, A.; Kociolek, M. QMaZda—Software tools for image analysis and pattern recognition. In Proceedings of the Signal Processing—Algorithms, Architectures, Arrangements, and Applications Conference Proceedings—SPA 2017, Poznan, Poland, 20–22 September 2017. [Google Scholar]
  44. Vipin, T. Understanding Digital Image Processing; CRC Press: Boca Raton, FL, USA, 2021. [Google Scholar]
  45. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  46. Kashyap, R.; Chellappa, R. Estimation and choice of neighbors in spatial-interaction models of images. IEEE Trans. Inf. Theory 1983, 29, 60–72. [Google Scholar] [CrossRef]
  47. Porter, R.; Canagarajah, N. Rotation invariant texture classification schemes using GMRFs and wavelets. In Proceedings IWISP’96; Elsevier Science Ltd.: Amsterdam, The Netherlands, 1996; pp. 183–186. [Google Scholar]
  48. Galloway, M.M. Texture analysis using grey level run lengths. Comput. Graph. Image Process 1975, 4, 172–179. [Google Scholar] [CrossRef]
  49. Ahonen, T.; Hadid, A.; Pietikainen, M. Face Description with Local Binary Patterns: Application to Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 2037–2041. [Google Scholar] [CrossRef]
  50. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
  51. de Rooij, M.; Israël, B.; Tummers, M.; Ahmed, H.U.; Barrett, T.; Giganti, F.; Hamm, B.; Løgager, V.; Padhani, A.; Panebianco, V.; et al. ESUR/ESUI consensus statements on multi-parametric MRI for the detection of clinically significant prostate cancer: Quality requirements for image acquisition, interpretation and radiologists’ training. Eur. Radiol. 2020, 30, 5404. [Google Scholar] [CrossRef] [PubMed]
  52. Labus, S.; Altmann, M.M.; Huisman, H.; Tong, A.; Penzkofer, T.; Choi, M.H.; Shabunin, I.; Winkel, D.J.; Xing, P.; Szolar, D.H.; et al. A concurrent, deep learning–based computer-aided detection system for prostate multiparametric MRI: A performance study involving experienced and less-experienced radiologists. Eur. Radiol. 2023, 33, 64–76. [Google Scholar] [CrossRef]
  53. Winkel, D.J.; Tong, A.; Lou, B.; Kamen, A.; Comaniciu, D.; Disselhorst, J.A.; Rodríguez-Ruiz, A.; Huisman, H.; Szolar, D.; Shabunin, I.; et al. A novel deep learning based computer aided diagnosis system improves the accuracy and efficiency of radiologists in reading bi-parametric magnetic resonance images of the prostate: Results of a multireader, multicase study. Investig. Radiol. 2021, 56, 605–613. [Google Scholar] [CrossRef] [PubMed]
  54. Sushentsev, N.; Da Silva, N.M.; Yeung, M.; Barrett, T.; Sala, E.; Roberts, M.; Rundo, L. Comparative performance of fully-automated and semi-automated artificial intelligence methods for the detection of clinically significant prostate cancer on MRI: A systematic review. Insights Imaging 2022, 13, 59. [Google Scholar] [CrossRef]
  55. Roest, C.; Kwee, T.; Saha, A.; Fütterer, J.; Yakar, D.; Huisman, H. AI-assisted biparametric MRI surveillance of prostate cancer: Feasibility study. Eur. Radiol. 2023, 33, 89–96. [Google Scholar] [CrossRef] [PubMed]
  56. Smith, C.P.; Harmon, S.A.; Barrett, T.; Bittencourt, L.K.; Law, Y.M.; Shebel, H.; An, J.Y.; Czarniecki, M.; Mehralivand, S.; Coskun, M.; et al. Intra- and interreader reproducibility of PI-RADSv2: A multireader study. J. Magn. Reson. Imaging 2019, 49, 1694–1703. [Google Scholar] [CrossRef]
  57. Liu, Y.; Zheng, H.; Liang, Z.; Miao, Q.; Brisbane, W.G.; Marks, L.S.; Raman, S.S.; Reiter, R.E.; Yang, G.; Sung, K. Textured-Based Deep Learning in Prostate Cancer Classification with 3T Multiparametric MRI: Comparison with PI-RADS-Based Classification. Diagnostics 2021, 11, 1785. [Google Scholar] [CrossRef]
  58. Arslan, A.; Alis, D.; Erdemli, S.; Seker, M.E.; Zeybel, G.; Sirolu, S.; Kurtcan, S.; Karaarslan, E. Does deep learning software improve the consistency and performance of radiologists with various levels of experience in assessing bi-parametric prostate MRI? Insights Imaging 2023, 14, 48. [Google Scholar] [CrossRef] [PubMed]
  59. Giannini, V.; Mazzetti, S.; Armando, E.; Carabalona, S.; Russo, F.; Giacobbe, A.; Muto, G.; Regge, D. Multiparametric magnetic resonance imaging of the prostate with computer-aided detection: Experienced observer performance study. Eur. Radiol. 2017, 27, 4200–4208. [Google Scholar] [CrossRef]
  60. Giannini, V.; Mazzetti, S.; Defeudis, A.; Stranieri, G.; Calandri, M.; Bollito, E.; Bosco, M.; Porpiglia, F.; Manfredi, M.; De Pascale, A.; et al. A Fully Automatic Artificial Intelligence System Able to Detect and Characterize Prostate Cancer Using Multiparametric MRI: Multicenter and Multi-Scanner Validation. Front. Oncol. 2021, 11, 718155. [Google Scholar] [CrossRef]
  61. Bhattacharya, I.; Khandwala, Y.S.; Vesal, S.; Shao, W.; Yang, Q.; Soerensen, S.J.; Fan, R.E.; Ghanouni, P.; Kunder, C.A.; Brooks, J.D.; et al. A review of artificial intelligence in prostate cancer detection on imaging. Ther. Adv. Urol. 2022, 14, 17562872221128791. [Google Scholar] [CrossRef] [PubMed]
  62. Virarkar, M.; Szklaruk, J.; Diab, R.; Bassett, J.R.; Bhosale, P. Diagnostic value of 3.0 T versus 1.5 T MRI in staging prostate cancer: Systematic review and meta-analysis. Pol. J. Radiol. 2022, 87, e421–e429. [Google Scholar] [CrossRef]
Figure 1. Histogram of the number of patients having the same number of images in their MRI scans. The number of analysed images per patient differs due to gland size and zone margins.
Figure 2. Prostate MRI scan with overlaid manual annotation of the central (blue) and peripheral (orange) zones.
Figure 3. Distribution of the best 500 classification models in the results space.
Table 1. Values of parameters following the PIRADS guidance to perform MRI scans.

Parameter               Value
Time echo               105
Relaxation time         3320
Flip angle              160
Imaging matrix          256 × 320
Voxel                   0.6 × 0.6 × 2
Field of view           200
Concentrations          2
Averages                4
Parallel acquisition    2
Distance factor         0
Overall sequence time   300

T2-weighted (axial).
Table 2. The best 10 pairs of features when the central zone is used to calculate the features.

Accuracy  Precision  Recall  F1-Score  Feature 1          Feature 2
0.8152    0.8485     0.7000  0.7671    YN7GlcmZ3DifEntrp  YS8ArmTeta2
0.7609    0.9394     0.6078  0.7381    YS7ArmTeta2        YD8GradMean
0.7500    0.9394     0.5962  0.7294    YS8ArmTeta2        YD8GradMean
0.7500    0.9091     0.6000  0.7229    YS6ArmTeta2        YD8HistSkewness
0.7391    0.9394     0.5849  0.7209    YS6ArmTeta2        YD8GradMean
0.7609    0.8485     0.6222  0.7180    YN7GlcmN4DifEntrp  YS8ArmTeta2
0.7500    0.8788     0.6042  0.7161    YS7GlcmN4DifEntrp  YS8ArmTeta2
0.7500    0.8788     0.6042  0.7161    YS6GlcmN5DifEntrp  YS8ArmTeta2
0.7391    0.9091     0.5882  0.7143    YM4GlcmH1SumVarnc  YD8Gab12V6Mag
0.7391    0.8788     0.5918  0.7073    YS8GlcmZ3DifEntrp  YS8ArmTeta2
Table 3. The best 10 pairs when the peripheral zone is used to calculate the features.

Accuracy  Precision  Recall  F1-Score  Feature 1          Feature 2
0.7363    0.8750     0.5833  0.7000    YS6GlcmV3SumEntrp  YD8HistMean
0.7253    0.9063     0.5686  0.6988    YS6GlcmV3InvDfMom  YD8HistMean
0.7363    0.8125     0.5909  0.6842    YN5ArmTeta2        YD8HistMean
0.7253    0.8438     0.5745  0.6835    YN4GlcmN3InvDfMom  YD8HistMean
0.7363    0.7813     0.5952  0.6757    YN5GlcmV3DifEntrp  YD8HistMean
0.7143    0.8438     0.5625  0.6750    YM4GlcmN3SumOfSqs  YS8DwtHaarS1HH
0.7033    0.8750     0.5490  0.6747    YN5GlcmH4Entropy   YD8HistKurtosis
0.7033    0.8750     0.5490  0.6747    YN7HogO8b3         YM8HistPerc01
0.7033    0.8438     0.5510  0.6667    YM4GlcmH3DifEntrp  YD8HistMean
0.6813    0.9063     0.5273  0.6667    YN6GlcmH2SumEntrp  YD8HistMean
Table 4. The best 10 pairs of features when both (central and peripheral) regions are used to calculate the features.

Accuracy  Precision  Recall  F1-Score  Feature 1          Feature 2
0.7500    0.8788     0.6042  0.7161    YM4GlcmZ3Contrast  YD8HistMean
0.7500    0.8485     0.6087  0.7089    YM6GlcmN4SumEntrp  YD5ArmTeta4
0.7500    0.8485     0.6087  0.7089    YM5GlcmZ5Entropy   YD5ArmTeta4
0.7391    0.8788     0.5918  0.7073    YM6GlcmZ2SumEntrp  YD5ArmTeta4
0.7500    0.8182     0.6136  0.7013    YM7GlcmZ4SumVarnc  YD5ArmTeta4
0.7500    0.8182     0.6136  0.7013    YM6GlcmZ4SumVarnc  YD5ArmTeta4
0.7500    0.8182     0.6136  0.7013    YM5GlcmH4SumEntrp  YD5ArmTeta4
0.7500    0.8182     0.6136  0.7013    YM5GlcmZ4SumVarnc  YD5ArmTeta4
0.7391    0.8485     0.5957  0.7000    YM6GlcmH2SumEntrp  YD5ArmTeta4
0.7283    0.8788     0.5800  0.6988    YM5GlcmN2SumVarnc  YD5ArmTeta4

