Article

A Series-Based Deep Learning Approach to Lung Nodule Image Classification

1 Faculty of Science, Mathematics Department, Muğla Sıtkı Koçman University, 48000 Muğla, Turkey
2 Department of Business, Babeş-Bolyai University, 400174 Cluj-Napoca, Romania
3 Faculty of Economics, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
* Authors to whom correspondence should be addressed.
Cancers 2023, 15(3), 843; https://doi.org/10.3390/cancers15030843
Submission received: 19 November 2022 / Revised: 24 January 2023 / Accepted: 28 January 2023 / Published: 30 January 2023


Simple Summary

Medical image classification is an important task in computer-aided diagnosis, medical image acquisition, and mining. Although deep learning has been shown to outperform traditional methods based on handcrafted features, the task remains difficult due to significant intra-class variation and inter-class similarity caused by the diversity of imaging modalities and clinical pathologies. This study presents an innovative method at the intersection of 3D image analysis and series classification. The self-similarity features in medical images are captured by converting regions of interest to series with a radial scan, and these series are classified with U-shape convolutional neural networks. The findings of this study are expected to be useful to researchers from various disciplines working on radially scanned images, as well as researchers working on artificial intelligence in health.

Abstract

Although many studies have shown that deep learning approaches yield better results than traditional methods based on handcrafted features, CAD methods still have several limitations. These stem from the diversity of imaging modalities and clinical pathologies, which produces high intra-class variation and inter-class similarity. In this context, the novelty of our study is a hybrid method that performs classification using both medical image analysis and radial scanning series features. The regions of interest obtained from images are subjected to a radial scan, with their centers as poles, in order to obtain series. A U-shape convolutional neural network model is then used for the resulting 4D classification problem. We therefore present a novel approach to the classification of 4D data obtained from lung nodule images. With radial scanning, the intrinsic shape characteristics of nodule images are captured and a powerful classification is performed. Our method achieved an accuracy of 92.84% and produced considerably better classification scores than recent classifiers.

1. Introduction

Cancer is one of today’s most serious health issues. Despite significant and promising advances in medicine, the desired level of prevention and elimination of many cancers has yet to be achieved [1,2,3]. Cancer is a diverse disease with numerous subtypes, its treatment is difficult and time-consuming, and some common types are lethal. Early detection is therefore critical, and clinical follow-up of the patient supports diagnosis in later stages. In this context, screening is the search for cancer cells in people who have no symptoms; the screening stages are the most important steps in the fight against cancer because they enable early diagnosis. Information obtained by imaging methods is used to determine the cancer type and stage, which is extremely useful for treatment planning. Consequently, the accuracy of information obtained by screening methods can change the outcome of the disease: patients can live longer and more fulfilling lives when correct screening methods and accurate analyses inform the treatment plan. The application of advanced technology in cancer imaging, together with correct evaluation of the resulting data, is highly effective for determining treatment plans, and patients who benefit from proper imaging techniques gain an advantage during the difficult treatment process.
Due to the high cost of equipment and personnel, as well as the difficulty of the task, it is not possible to apply known screening programs to every person. Lung nodules come in a wide range of shapes and sizes, so identifying and characterizing abnormalities in these nodules is a difficult and delicate task. In this regard, computer-aided diagnosis (CAD) systems are critical for making clinicians’ jobs easier.
Image processing and machine learning-based research on digital pathology image classification have yielded promising results. These findings suggest that digital pathology systems based on machine learning could be widely used in pathology clinics. Artificial intelligence and machine learning-based solutions will be used at a much higher rate in the coming years, particularly in pathology.
The mortality rate from lung cancer is the greatest of any kind of cancer, although its prognosis may be improved with early diagnosis. To establish which pulmonary nodules are benign and which need biopsy to confirm malignancy, low-dose computed tomography has become the standard procedure for lung cancer screening. Nevertheless, lung cancer screening has a significant clinical false-positive rate, because a high proportion of nodules must be flagged for biopsy in order not to miss malignancies. As a result, many unnecessary biopsies are conducted on people who turn out not to have cancer.
In this study, we provide a CNN architecture that combines data from volumetric radiomics series and nodule images for categorization. Qualitative and quantitative characteristics may be found in lung CT images, and these characteristics illustrate the nodule’s pathogenesis. The quantitative characteristics are retrieved from the image using mathematics and data characterization methods. The term “radiomics” describes the procedure, whereas “radiomic features” refers to the numerical characteristics gleaned from the data. As defined in [6], this process involves “high-throughput extraction of quantitative information from radiological pictures to build a radiomic, high-dimensional dataset followed by data mining for possibly better decision support.” The radiomic characteristics of nodules primarily include their morphology, shape, and gray-level distribution. This research uses a spherical radial scan of a 3D model derived from CT scans to encode information about the nodule’s volume and shape across slices: the regions created in each level plane are scanned radially while the planes themselves are shifted from bottom to top. Thus, the shape shift may be considered together with the gray-level distributions of the CT scans collected at the various stages. Using the LIDC-IDRI dataset, we take a novel approach to predicting the malignancy of lung nodules by integrating hitherto unexplored combinations of image and volumetric radiomic data with series induced by volumetric radiomics.
CAD methods still have several limitations, despite numerous studies demonstrating that deep learning approaches outperform traditional methods based on handcrafted features [7,8,9,10]. This is because imaging modalities and clinical pathologies differ, and such diversity creates difficulties through differences and similarities between classes. In this context, the new approach in our study is a hybrid method that classifies data using both medical image analysis and series features. Image-derived regions of interest are subjected to a radial scan, with their centers acting as poles, in order to obtain series. A convolutional neural network (CNN) model is used to solve the series classification problem. We advance a method for classifying series obtained from lung nodule images: the intrinsic shape characteristics of the nodule images are captured using radial scanning and a powerful classification is performed. According to our results, we obtained an accuracy of 92.84% and significantly higher classification scores as compared to numerous traditional classifiers.

Related Works

Many pre-diagnosis models capitalize on CNN architectures, which revolutionized computer vision research by making color images usable as input data. In this context, input data are processed by a succession of kernels that slide over the image color channels to extract characteristics such as edges and color gradients; the resulting feature maps are summed and flattened before being passed to the downstream fully connected layers of an artificial neural network (ANN). Several kinds of preconfigured CNN architectures are available, and both radiology and digital pathology benefit greatly from their use.
There has been extensive research into the development of CAD systems for lung cancer screening. Detection and segmentation of pulmonary nodules, characterization of nodules, and classification of malignancy were among the studies that stood out. Recently, very good and promising results in lung cancer screening, as well as other cancer screenings, have been obtained, with deep learning-supported studies on nodule detection, segmentation, and characterization [11,12,13].
Capabilities of CADs and radiomic tools to improve diagnostic accuracy and consistency across medical images help radiologists’ decision-making [14,15]. CADs and radiomics rely on segmentation and quantitative feature extraction from images of identified nodules as its foundation. Moreover, machine learning algorithms use this collection of properties as a training set for classifying unseen nodule samples [16,17]. Such studies focus on the intranodular region and employ radiomic characteristics of its shape, boundary, and tissue for the most accurate identification [18,19,20,21,22].
Deep learning saves time for medical professionals by performing the complex classification of large numbers of images, a task that requires significant time and effort, while also avoiding possible human-induced errors during the diagnosis phase [23,24,25]. Accurate and early diagnosis is effective in all disease types, and deep learning-based methods have been successfully applied to early diagnosis, which is a crucial stage in cancer [26,27,28]. Deep network architectures have evolved and their computational power has increased as deep learning models have advanced on specific tasks. With the significant progress of CNNs, deep neural networks have begun to be used effectively in computer vision tasks such as image classification, object detection, and image segmentation. Deep learning and CNN advancements have been critical in the development of medical systems for reliable screening and image-based diagnostics. As a result, research has progressed from image segmentation and feature extraction to deep learning-based automatic classification [29,30].
Abdoulaye et al. [31] classified mammography images in three stages: they first removed noise from each image by examining its surroundings, then discovered the physical properties of the object and extracted patterns. In this way, they created a cancer detection system based on an artificial intelligence-enabled algorithm trained with the patterns they obtained. Wang et al. [32] used an automatic image analysis technique to classify breast cancer histopathology images. They obtained 4 shape-based features and 138 color-space features for nuclei classification. As a preprocessing step, they used a bottom-hat transformation to highlight objects against the background in order to locate growing cancer cells. Afterwards, they used the wavelet transform to determine the location of ROIs and, as a result, classified normal and malignant cell images with a 96.19% success rate. Jiang et al. [33] developed their own method by studying lymphatic pathologies such as chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). After preprocessing the image, they extracted a feature set that included texture properties such as entropy, density mean, density standard deviation, loopy back propagation, and the gray level co-occurrence matrix. They used the support vector machine (SVM) algorithm to classify pathology images based on the extracted features, achieving an average accuracy of 97.96%. Muhammad et al. [34] trained ANNs to predict pancreatic cancer risk using clinical variables such as age, smoking status, alcohol consumption, and ethnicity.
Busnatu et al. [35] and Hunter et al. [36] present a detailed account of the recent literature studies on artificial intelligence and deep learning applications classified according to medical specialties. Readers can refer to these two studies for more comprehensive information on deep learning applications regarding cancer diagnosis based on image analysis.
Image series can be created by taking successive images of the same scene at ordered input steps. If each sequential input corresponds to a time tick, the obtained series are time series. Several researchers have developed effective methods for correctly interpreting image time-series data [37,38,39,40,41,42,43,44,45,46,47]. With early diagnosis and correct treatment, the quality of patients’ lives can be substantially improved thanks to the analysis of biomedical time series with accurate and reliable techniques, the understanding of such data, and the rapid detection of possible abnormalities. The use of temporal correlation in time-series analysis is critical to the success of the chosen methods. In this context, image time series are critical in biomedicine for monitoring disease progression.
Iakovidis et al. [37] used time series obtained from chest radiographs to track the progression of pneumonia. In contrast, Baur et al. [38] used canonical correlation analysis and Dynamic Bayesian Networks (DBN) to extract validated gene regulatory networks from time-series gene expression data. Likewise, Guo et al. [39] built gene regulatory networks with a feature selection algorithm based on partial least squares (PLS). In their studies, Penfold et al. [40] and Isci et al. [41] introduced Bayesian methodologies for network analysis using biological data, especially measures of time-series gene expression. Schlitt et al. [42] used Bayes and DBNs to explain gene expression variations over time in terms of regulatory network topologies. According to Ni et al. [44] and Kim et al. [45], the study of Murphy et al. [43] suggested techniques capable of expressing the time-varying behavior of the underlying biological network, hence offering a more accurate representation of spatio-temporal input–output connections. In their work, Kourou et al. [46] used time-series microarray gene expression data to classify differentially expressed genes (DEGs) in cancer with great effectiveness. Imani et al. [47] expanded the analysis of radio frequency (RF) time series to enhance tissue classification at clinical frequencies by using additional time-series spectrum characteristics.
Various non-local deep learning architectures, which we also used in the comparison analysis, have been successfully applied to the nodule classification task. Shen et al. [48] proposed multi-crop convolutional neural networks and Al-Shabi et al. [49] advanced gated-dilated networks for malignancy classification, obtaining accuracy scores above 87%. Moreover, Ren et al. [50] built a manifold regularized classification deep neural network (MRC-DNN) that performs classification directly on the manifold representation of lung nodule images, motivated by the observation that the genuine structure among data is typically contained on a low-dimensional manifold. Shen et al. [51] showed that the resilience of a representative DL-based lung-nodule classification model for CT images could be improved, highlighting the need for assessing and assuring model robustness while creating comparable models. To increase the depth of representation, Jiang et al. [52] first developed a contextual attention mechanism to model contextual relationships between neighboring sites, then employed a spatial attention technique to automatically find the zones that are crucial for nodule categorization, and finally used an ensemble of models to increase the reliability of their predictions. Al-Shabi et al. [53] suggested using residual blocks for local feature extraction and non-local blocks for global feature extraction. Furthermore, Al-Shabi et al. [54] used 3D Axial-Attention, which requires only a small amount of processing power compared to a traditional non-local network.

2. Methodology

The 3D volumetric structure comprises the sections designated as nodules by radiologists from 2D CT scans, together with the series derived from the boundary curves of each section. The following paragraphs explain boundary curves and the process of extracting series out of them. Moreover, details on 3D models and the underlying deep learning framework are provided.

2.1. Series by Radial Scanning

A radial scan gathers image samples in a sparser distribution at the periphery of the image and in a denser distribution closer to its center. This is the preferred scanning paradigm for several imaging applications, such as imaging the optic nerve head, since each acquired B-scan includes a cross-sectional image of the optic cup [55,56,57]. The radially obtained data samples can be visualized and analyzed through volumetric, rendering, and morphometric analysis of the resulting image, and a straightforward radial-to-Cartesian coordinate translation may be used to resample the data onto a Cartesian mesh.
Figure 1 illustrates a radial scan. The region of interest of a lung nodule image is shown in Figure 1a. The radial scan axis is positioned at the center of the region of interest, and the boundary curve of the region is depicted in Figure 1b. The distance between the boundary curve points and the scanning center varies as the scanning angle changes, resulting in a series, as illustrated in Figure 1c.
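The following is a minimal sketch of this radial scan in Python, assuming the ROI boundary is available as an array of vertices and the pole is the ROI center; the function name and the sampling scheme (nearest boundary vertex per scan angle) are our own illustration, not the paper's implementation.

```python
import numpy as np

def radial_scan_series(boundary_points, pole, n_angles=64):
    """Convert a closed boundary curve into a 1D series by radial scanning.

    boundary_points: (N, 2) array of (x, y) vertices of the ROI boundary.
    pole: (2,) scanning center, e.g., the ROI's center of gravity.
    n_angles: number of equally spaced scan angles over [0, 2*pi).
    Returns, for each scan angle, the distance from the pole to the
    boundary vertex whose direction is nearest that angle.
    """
    pole = np.asarray(pole, dtype=float)
    rel = np.asarray(boundary_points, dtype=float) - pole
    angles = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    radii = np.hypot(rel[:, 0], rel[:, 1])

    scan_angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    series = np.empty(n_angles)
    for k, theta in enumerate(scan_angles):
        diff = np.abs(angles - theta)
        diff = np.minimum(diff, 2 * np.pi - diff)  # wrap-around angular distance
        series[k] = radii[np.argmin(diff)]
    return series
```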
ROIs are portions of a designated data collection that are used for a certain objective. The term ROI is often used in a variety of application fields. For instance, in medical imaging, the borders of a tumor can be specified in an image or a volume to determine its size. For the purpose of assessing cardiac function, the endocardial boundary can be seen on an image at various points in the cardiac cycle, such as end-systole and end-diastole. The ROI establishes the perimeters of an item under inspection in computer vision and optical character recognition.
The CT images used in this study first underwent pixel-by-pixel binarization. After this morphological processing, large connected components in the binarized images are treated as ROIs. For radial scanning, the discrete center of gravity of each ROI is used as the scanning center. Due to the binary nature of the image, this center can be computed easily without any intensity weighting.
The modified Canny edge detection approach [58] is applied to the ROI in each image to extract the appropriate shape attributes, yielding one boundary per ROI within each image. The Canny operator employs a multi-step process to identify the edge pixels of an object. First, the area boundaries are smoothed with a Gaussian filter. Then, a regular 2D first-derivative operator is used to compute edge strength. Pixels that are not part of a local maximum are zeroed out when the non-maximum suppression step scans the region in the gradient direction. Lastly, thresholding is employed to determine the correct edge pixels. Each ROI may therefore be represented by its own boundary curve.
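A compact sketch of this boundary extraction using OpenCV is shown below; cv2.Canny performs the gradient, non-maximum suppression, and thresholding steps internally. The smoothing kernel size and the two thresholds are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def roi_boundary_curve(roi_mask):
    """Extract the boundary curve of a binary ROI with the Canny operator.

    roi_mask: 2D uint8 array with values in {0, 255}.
    Returns an (N, 2) array of (x, y) boundary pixel coordinates.
    """
    # Step 1: Gaussian smoothing to regularize the region boundary.
    blurred = cv2.GaussianBlur(roi_mask, (5, 5), sigmaX=1.4)
    # Steps 2-4 (gradient, non-maximum suppression, thresholding) are
    # performed internally by cv2.Canny.
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    ys, xs = np.nonzero(edges)
    return np.column_stack([xs, ys])
```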
While extracting image features, it is essential to simplify the edges used for ROI representation. The aim of the region boundary simplification stage is to create a smooth curve while minimizing the number of line segments used to delineate the area. This method is known as polygon approximation: a polygonal curve is approximated by one with a fixed number of vertices, searching for a subset of the initial vertices that minimizes an objective function. One way to frame the issue is the min-number problem, in which an N-vertex polygonal curve is approximated by another polygonal curve made of M straight-line segments. A common heuristic for solving the min-number problem is the Douglas–Peucker (DP) method [59].
In this study, prior to using the Hough transform to extract features, the borders of ROIs are simplified using the Douglas–Peucker (DP) technique. The DP method is driven by the distance of each vertex to an edge segment. It operates top-down, beginning with a rough initial estimate of the simplified polygonal curve, namely the single edge linking the first and last vertices of the curve. The distances of the remaining vertices to that edge are then computed. If some vertices lie farther from the edge than a given tolerance (ε > 0), the vertex farthest from the edge is added to the simplification, producing a new estimate of the reduced polygonal curve. This process continues recursively until all vertices of the original polygonal curve fall within the tolerance.
If the ROI border is considered a closed curve, we must also determine where the simplification should start. The simplest approach is to try every vertex and keep the start that yields the fewest errors; compared to the open-curve procedure, this exhaustive method is N times more expensive for a curve with N vertices. This research instead uses a heuristic inspired by Sato’s strategy [60]: the procedure starts at the point farthest from the ROI’s spatial center. A sketch of the simplification appears after this paragraph.
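The sketch below implements DP simplification for an open curve and notes the closed-curve heuristic in a trailing comment; the tolerance handling follows the description above, while the function name is our own.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Simplify an open polygonal curve with the Douglas-Peucker method.

    points: (N, 2) array of ordered vertices; eps: tolerance (eps > 0).
    Returns the simplified curve as a subset of the original vertices.
    """
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.hypot(chord[0], chord[1]) or 1.0
    # Perpendicular distance of every vertex to the chord start-end.
    dists = np.abs(chord[0] * (points[:, 1] - start[1])
                   - chord[1] * (points[:, 0] - start[0])) / norm

    idx = int(np.argmax(dists))
    if dists[idx] > eps:
        # The farthest vertex exceeds the tolerance: keep it and recurse
        # on the two sub-curves it separates.
        left = douglas_peucker(points[: idx + 1], eps)
        right = douglas_peucker(points[idx:], eps)
        return np.vstack([left[:-1], right])
    # All vertices lie within the tolerance of the chord.
    return np.vstack([start, end])

# For a closed ROI boundary, the Sato-inspired heuristic above suggests
# rotating the vertex list to start at the point farthest from the ROI's
# spatial center and then treating the curve as open.
```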

2.2. 3D Nodule Segmentation

In this research, computer-assisted techniques were used to identify nodules. Automatic nodule recognition and segmentation is achieved using the union of the You Only Look Once, Version 3 (YOLOv3) [61] and iW-Net [62] architectures. In short, the model is fine-tuned to identify lung nodules by minimizing a loss function that considers the width, height, and center of gravity of the estimate in comparison to the baseline. In order to take 3D information into account, the algorithm is trained using 3-channel images consisting of one axial slice comprising the nodule center and two equally spaced neighboring slices. Candidates are joined if their bounding boxes overlap, and estimates are calculated for each axial slice. Only the first block of iW-Net, which makes a segmentation prediction, is utilized for the actual segmentation. We employ an image classification method to identify nodules with a bounding box in order to facilitate the use of temporal statistical classification with the series collected from the image; these bounding-box marks were created manually. Each image of interest has different dimensions according to the series methodology used in this research; after the series have been normalized, this variation has no bearing on the categorization.
The LIDC-IDRI database contains thoracic CT images with richly annotated lesions for the purpose of detecting lung cancer. For the automatically segmented nodules, the series acquisition approach finds the nodule border by drawing a closed curve around each nodule wherever it is present, beginning at the first pixel outside the lesion. CT scan findings are recorded in an XML file associated with each participant. Nodules in each XML file are grouped into one of three sizes based on their diameter. The locations of the nodules and their z coordinates are included in the data. With these coordinates, we generated a fixed-size box and mask in three dimensions centered on the annotated lung nodule sites. Our experimental boxes are 32 pixels square and 32 slices thick. Nodule boundary curves in the sections are scanned radially in 5.625-degree increments to conform to the 3D volumetric data format. By using a thickness of 32 slices, we may encode the nodule’s border geometry as a 32 × 32 matrix.
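Putting the pieces together, a sketch of how such a shape matrix could be assembled from a segmented nodule mask, reusing the radial-scan and boundary helpers sketched earlier; the normalization step reflects the statement that the series are normalized before classification, and all names are our own.

```python
import numpy as np

def nodule_shape_matrix(mask3d, n_angles=32):
    """Encode a segmented nodule's boundary geometry as a shape matrix.

    mask3d: binary array of shape (n_slices, H, W) for the nodule box.
    One matrix row is produced per slice, so a stack of 32 slices scanned
    at 32 angles yields a 32 x 32 matrix. Reuses radial_scan_series() and
    roi_boundary_curve() from the sketches above.
    """
    rows = []
    for sl in (mask3d.astype(np.uint8) * 255):
        boundary = roi_boundary_curve(sl) if sl.max() > 0 else np.empty((0, 2))
        if len(boundary) == 0:               # slice does not cut the nodule
            rows.append(np.zeros(n_angles))
            continue
        pole = boundary.mean(axis=0)         # discrete center of gravity
        rows.append(radial_scan_series(boundary, pole, n_angles))
    matrix = np.asarray(rows)
    # Normalize so that nodule size differences do not bias classification.
    return matrix / (matrix.max() if matrix.max() > 0 else 1.0)
```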
Figure 2 depicts a 3D segmented nodule and the aforementioned shape matrix. To illustrate the methodology, we ran 2D radial scans with an angle increment of 2 degrees and applied Laplace smoothing to the Delaunay mesh derived from the boundary points shown in Figure 2. Following the smoothing of the nodule surface, 180 z-axis steps were chosen.

2.3. Classification with U-Net

Two-dimensional conventional CNN designs typically integrate raw input data with learnable filters layer by layer. A network may be built from several layers, each trained to recognize a particular aspect of an image. Each training image is passed through a series of filters of increasing granularity, and the resulting convolved image serves as input for the layer below. A filter may begin with basic characteristics such as brightness and edges and progress to more complicated characteristics that better describe the object being filtered. This study proposes a technique that works well inside a deep learning framework, using higher-order CNNs for effective feature learning from unprocessed CT image data. This is accomplished by stacking many convolutional layers in order to collect a wide variety of representative characteristics. By using convolutions and trainable filters with specific filter coefficients, we link input and output neurons.
This paper provides a solution to the 4D input problem of jointly categorizing nodule volumetric radiomic and border information. For this challenge, we use a method centered on U-Net models that generalizes the 2D and 3D architectures [63,64]. As shown in Figure 3, to train our 4D U-Net model efficiently and use it for classification, we combine the shape matrix obtained from radial scanning with a tensor that holds the coordinates (in mm³) of each segmented voxel in the 3D volume and the grayscale value at these coordinates. The model uses this jointly generated 4D data input. Lower-order models require data reduction prior to network training; in contrast, our proposed architecture makes extensive use of higher-dimensional data while performing all operations on nominally sized datasets.
The value of a convolved output neuron at coordinates $(k, l)$ in conventional 2D CNNs may be written as follows:

$$y_{kl} = \varphi\left(\sum_{c}^{C_{in}} \sum_{i=0}^{H-1} \sum_{j=0}^{W-1} w_{ij}\, x_{c,\,k+i,\,l+j} + b_{ij}\right), \tag{1}$$

where $\varphi$ is the activation function, $w_{ij}$ is the value of the kernel connected to the current feature map at position $(i, j)$, $x_{c,\,k+i,\,l+j}$ is the value of the input neuron at input channel $c$, and $b_{ij}$ is the bias of the computed feature map. Moreover, by following the extension method presented in [65], we can straightforwardly extend Equation (1) to 4D:

$$y_{klmn} = \varphi\left(\sum_{c}^{C_{in}} \sum_{r=0}^{R-1} \sum_{d=0}^{D-1} \sum_{i=0}^{H-1} \sum_{j=0}^{W-1} w_{ijdr}\, x_{c,\,k+i,\,l+j,\,m+d,\,n+r} + b_{ijdr}\right). \tag{2}$$
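To make Equation (2) concrete, the following is a naive, unoptimized single-filter 4D convolution under our reading of the index convention above; the function name is our own, and the scalar bias stands in for the per-position biases (which, summed over the kernel, amount to one constant).

```python
import numpy as np

def conv4d_single_filter(x, w, b=0.0, phi=np.tanh):
    """Naive 4D convolution of one filter, following Equation (2).

    x: input of shape (C_in, K, L, M, N) -- channels plus four axes
       (two image axes, slice depth, and the radial-series axis).
    w: kernel of shape (C_in, H, W, D, R); b: scalar bias.
    Returns the activated map of shape (K-H+1, L-W+1, M-D+1, N-R+1).
    """
    C, K, L, M, N = x.shape
    _, H, W, D, R = w.shape
    out = np.zeros((K - H + 1, L - W + 1, M - D + 1, N - R + 1))
    for k in range(out.shape[0]):
        for l in range(out.shape[1]):
            for m in range(out.shape[2]):
                for n in range(out.shape[3]):
                    # Sum over c, i, j, d, r as in Equation (2).
                    patch = x[:, k:k + H, l:l + W, m:m + D, n:n + R]
                    out[k, l, m, n] = np.sum(w * patch) + b
    return phi(out)
```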
With our deep pixel-level categorization, each pixel can be assigned to only one of C distinct categories. Because cross-entropy may be understood as the log-likelihood function of the training samples, it was chosen as the loss function to turn the network’s outputs back into probabilities. Training our models with this loss function combines the SoftMax activation with the cross-entropy loss to produce a probability over the C possible classes for each pixel.
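As a concrete illustration of this pairing, PyTorch's nn.CrossEntropyLoss fuses a log-softmax with the negative log-likelihood loss; the tensor shapes below are hypothetical stand-ins for a small batch of logits over C = 2 classes (benign vs. malignant).

```python
import torch
import torch.nn as nn

# CrossEntropyLoss applies LogSoftmax internally, so the network emits
# raw logits and the loss converts them into class probabilities.
criterion = nn.CrossEntropyLoss()

# Hypothetical batch: logits for 8 samples over 2 classes and their
# integer ground-truth labels.
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))

loss = criterion(logits, labels)   # scalar cross-entropy training loss
loss.backward()                    # gradients flow back to the logits
```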

3. Results

Overall, for this study, 244,559 images and 1018 CT scans from 1010 patients were provided by the Lung Image Database Consortium (LIDC) [66]. The five categories used to classify pulmonary nodule lesions in the LIDC image collection are: highly likely to be benign (level 1); moderate probability of being benign (level 2); uncertain probability (level 3); moderate probability of malignancy (level 4); highly likely to be malignant (level 5). Because the original LIDC image collection lacks a database structure relating images, examinations, and the likelihood of nodule malignancy, it is difficult to use directly. We therefore chose to utilize the not only SQL (NoSQL) document-oriented Pulmonary Nodule Database (PND) [67] for our analysis.
The LIDC-IDRI study may be broken down into three major phases: image interpretation, nodule characteristic evaluation, and data recording. A radiologist analyzed each image of a CT examination using a computer interface and highlighted lesions deemed to be nodules with in-plane dimensions between 3 and 30 mm, independent of assumed histology. As a result, these lesions may represent a primary lung cancer, a metastatic disease, a noncancerous condition, or a lesion of unknown etiology. Each nodule outline was intended to be a localizing “outside boundary” such that, according to the radiologist, the outline itself did not overlap nodule-specific pixels. According to the LIDC-IDRI literature, during the nodule characteristic evaluation procedure, each reader was requested to subjectively assign an integer value to nine distinct qualities. The nodule classifications and their Cartesian coordinates are stored in an eXtensible Markup Language (XML) file. The XML file and all CT scans from a single examination are kept in one folder, and all folders from all examinations were uploaded to a web server hosted by the Cancer Imaging Archive (TCIA) [68]. To avoid redundant scans, PND only uses the annotations of the radiologist who identified the most lesions during each examination, which amounts to 752 scans and 1944 lung nodules. To normalize the image contrast, gray-scale lung windowing was applied with a window/level of 1600/−600 Hounsfield units.
Nodules, which may be up to 30 mm in diameter, are a kind of lung opacity [69]. Initially, we computed the nodule size as a straightforward 2D measure of the biggest diameter in a slice, taken in the axial plane along the axis of the longest diameter [70]. To obtain these rough estimates, we measured the minimum and maximum x and y coordinates of every nodule slice. Following [71], lung nodules with a PND malignancy grade of 3 (uncertain probability) were considered too ambiguous to keep. We also excluded nodules in the LIDC collection annotated as non-solid because of the shape complexity and low density of these objects. After this phase, 897 nodules ranging in size from 3 mm to 30 mm remained (616 benign and 281 malignant). We were restricted from selecting smaller lesions due to the LIDC lower threshold of 3 mm.
A major restriction in this study was the uneven class distribution among the 897 nodules. During cross-validation training, the well-known Synthetic Minority Over-sampling Technique (SMOTE) [72] was used to generate synthetic malignant nodule samples and balance the classes. At each step of the cross-validation process, nine folds were chosen to form the training set, the remaining fold formed the test set, and the class proportions were preserved. Training sets comprised 550 benign nodules and 252 malignant nodules, and around 298 synthetic samples were produced by SMOTE at each step so that malignant nodules were represented as precisely as possible.
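The sketch below shows how SMOTE could be applied inside such a cross-validation loop using imbalanced-learn; the feature vectors are random stand-ins for the real nodule data, and only the training folds are oversampled so that no synthetic nodule leaks into the held-out test fold.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import StratifiedKFold

# Stand-in data: 897 nodules with hypothetical feature vectors;
# 616 benign (0) and 281 malignant (1), as in the filtered LIDC subset.
rng = np.random.default_rng(0)
X = rng.normal(size=(897, 32 * 32))
y = np.array([0] * 616 + [1] * 281)

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for train_idx, test_idx in skf.split(X, y):
    # Oversample the training folds only; SMOTE interpolates new
    # minority-class points between existing malignant nodules and
    # their nearest neighbors.
    X_res, y_res = SMOTE(random_state=42).fit_resample(X[train_idx], y[train_idx])
    # ... train on (X_res, y_res), evaluate on (X[test_idx], y[test_idx]) ...
```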
To assess the performance of the developed model, we employ a number of machine learning metrics, as the problem at hand is fundamentally a pixel-level multi-class classification task. True positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) are the four possible outcomes when comparing a pixel’s prediction to its ground truth. True and False denote agreement or disagreement between the ground-truth label and the predicted label, whereas Positive and Negative correspond to the class for which the metric is being calculated. Using these definitions, the following common ML metrics are employed for each type of data:
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + FP + FN + TN)
F1 = 2 × Precision × Recall / (Precision + Recall)
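A small helper makes the parenthesization above unambiguous; the function name and the example counts are purely illustrative.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute the four metrics above from confusion-matrix counts."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "accuracy": accuracy, "f1": f1}

# Example with hypothetical counts:
print(classification_metrics(tp=924, fp=74, fn=76, tn=926))
```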
The number of filters utilized for effective feature learning and the number of stack-layers in the proposed U-Net model are two major hyper-parameters that have a substantial impact on the model’s performance. In order to determine which combination of hyper-parameters produces the best results, we conducted an ablation study.
In the experiments, the effect of increasing the number of stacked levels on the performance of the U-Net is analyzed. We trained two separate 4D U-Net models, one with a depth of 3 and the other with a depth of 4, each with 4 and 8 filters. Table 1 shows that the network’s generalization capacity increases when more filters are applied, suggesting that the network becomes more robust. Naturally, the time needed to train the network grows in proportion to the number of filters, as each filter has its own set of parameters that must be learned. We also find that increasing the depth from three to four degrades performance for both filter counts. Overall, the best U-Net model has a depth of 3 with eight filters, can be trained in around 11 h, and reaches a classification accuracy of 92.84%.
Table 2 summarizes and tabulates comparisons between our proposed method and state-of-the-art lung nodule classification methods. The results of our evaluations show that our proposed method consistently outperforms the state-of-the-art, including other non-local-based methods such as Local-Global [53], 3D Dual Path Networks (DPNs) [52], and 3D Axial-Attention [54].

4. Discussion and Conclusions

Because lung nodules are so small that they can easily blend in with the surrounding tissue and cling to complicated anatomical structures such as the pleura, this work presents a deep learning strategy that additionally exploits volumetric radiomic information for classifying nodules in the lungs.
We started by obtaining 3-tensor data representing the gray levels of the 3D nodule shape modeled from cross-sectional CT scans; grayscale values between 0 and 255 are stored at each node of this tensor. Our study presents a deep learning solution to the long-standing problem of image classification by including the series collected from nodule sections. Our method takes into consideration the self-similarity of the boundary curves that characterize the nodule sections in order to provide a more precise categorization of nodules. By treating the series of the boundary curves of each section as rows in a matrix, we are able to solve the 4D classification problem.
For this research, we accessed a dataset hosted by LIDC. Over 95% accuracy was achieved when using the deep learning algorithms YOLOv3 and iW-Net to detect and segment the nodules in the annotated images; the respective images were manually cropped and recorded in this LIDC dataset using tags. We then employed the image processing techniques described in the methodology section to locate the nodule’s outermost and innermost boundary curves. Using 32 × 32 matrices, scanning at 5.625-degree increments over 32 section steps yielded shape matrices consistent with the volumetric radiomics of the nodule.
The research used a U-Net-type convolutional neural network, which proved successful for 4D categorization in previous studies [73,74,75,76]. Experiments were conducted using networks with 4 and 8 filters at depths of 3 and 4. The network with depth 3 and eight filters performed best (92.84% accuracy), and this outcome informed the selection of the network design in our study.
In the context of volumetric radiomics, the results of this research were compared with 3D CNN networks. The proposed method yields superior performance as compared to numerous non-local solutions presently available. Among the non-local approaches, the method presented in our research is most comparable to 3D Axial-Attention. Because it takes the nodule’s 3D shape into consideration, the 3D Axial-Attention approach is far more discriminating than earlier methods. However, the approach we present takes into account both the 3D geometry and the boundary shape of the nodule, allowing for a 4D convolution. Its performance is comparable to that of the 3D Axial-Attention approach, and its outcomes are superior to those of prior methods. Future research may incorporate more radiomic variables within the framework of the 3D Axial-Attention approach in order to obtain even more discriminating findings.
Limitations in the study design are inevitable, as is the case with every investigation. The primary barrier is the dearth of trained radiologists and experts in computer-assisted segmentation. The issue of class imbalance in the dataset may be addressed in a number of ways, all of which need careful consideration. Because the series angles derived from the radial-scanning boundary curve of the nodule follow one another in time, we may argue that the series represents a time series. An up-and-coming area of study in the field of forecasting is the use of time-series characteristics for model selection and model averaging [77,78,79]. However, most current methods need human intervention to choose a suitable collection of features. In modern time-series analysis, the use of machine learning techniques for automatically extracting characteristics from time series is becoming more important. Hybrid networks that can deal with radiomic features utilizing 3D geometry classification and machine learning-based time-series feature extraction may be studied in the future. Because our research demonstrates the usefulness of radial scanning, particularly in the context of medical image processing and classification, we believe it will serve as a benchmark for future studies examining other medical imaging methods.
In conclusion, we show that series derived from lung imaging may be used to effectively characterize lung nodules, and that a shape matrix built from the region-of-interest boundary curve can be used to reliably determine whether or not a tumor is malignant. We tested our methods on a large, publicly available dataset of lung nodule images and compared the outcomes to those produced by established methods for classifying both static images and temporal series; the requirement for our study to be reproducible prompted these comparisons. Our research indicates that radial scanning series may be a powerful asset in the identification and categorization of lung nodules.

Author Contributions

Conceptualization, M.A.B. and Ö.A.; Methodology, M.A.B. and Ö.A.; Software, M.A.B. and Ö.A.; Validation, M.A.B. and Ö.A.; Formal analysis, M.A.B. and Ö.A.; Investigation, M.A.B., L.M.B., Ö.A. and A.N.; Resources, M.A.B., L.M.B., Ö.A. and A.N.; Data curation, M.A.B. and Ö.A.; Writing—original draft, M.A.B., L.M.B., Ö.A. and A.N.; Writing—review and editing, M.A.B., L.M.B., Ö.A. and A.N.; Visualization, M.A.B. and Ö.A.; Project administration, M.A.B. and L.M.B.; Funding acquisition, A.N. All authors have read and agreed to the published version of the manuscript.

Funding

This study was conducted with financial support from the scientific research funds of the “1 Decembrie 1918” University of Alba Iulia, Romania.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rowland, J.H.; Bellizzi, K.M. Cancer survivors and survivorship research: A reflection on today’s successes and tomorrow’s challenges. Hematol. Oncol. Clin. N. Am. 2008, 22, 181–200. [Google Scholar] [CrossRef] [PubMed]
  2. Elmore, L.W.; Greer, S.F.; Daniels, E.C.; Saxe, C.C.; Melner, M.H.; Krawiec, G.M.; Phelps, W.C. Blueprint for cancer research: Critical gaps and opportunities. CA Cancer J. Clin. 2021, 71, 107–139. [Google Scholar] [CrossRef] [PubMed]
  3. Benning, L.; Peintner, A.; Peintner, L. Advances in and the applicability of machine learning-based screening and early detection approaches for cancer: A primer. Cancers 2022, 14, 623. [Google Scholar] [CrossRef] [PubMed]
  4. Nanavaty, P.; Alvarez, M.S.; Alberts, W.M. Lung cancer screening: Advantages, controversies, and applications. Cancer Control 2014, 21, 9–14. [Google Scholar] [CrossRef] [Green Version]
  5. Khawaja, A.; Bartholmai, B.J.; Rajagopalan, S.; Karwoski, R.A.; Varghese, C.; Maldonado, F.; Peikert, T. Do we need to see to believe?—Radiomics for lung nodule classification and lung cancer risk stratification. J. Thorac. Dis. 2020, 12, 3303. [Google Scholar] [CrossRef]
  6. Parekh, V.; Jacobs, M.A. Radiomics: A new application from established techniques. Expert Rev. Precis. Med. Drug Dev. 2016, 1, 207–226. [Google Scholar] [CrossRef] [Green Version]
  7. Oliveira, S.P.; Neto, P.C.; Fraga, J.; Montezuma, D.; Monteiro, A.; Monteiro, J.; Cardoso, J.S. CAD systems for colorectal cancer from WSI are still not ready for clinical acceptance. Sci. Rep. 2021, 11, 14358. [Google Scholar] [CrossRef]
  8. Yanase, J.; Triantaphyllou, E. A systematic survey of computer-aided diagnosis in medicine: Past and present developments. Expert Syst. Appl. 2019, 138, 112821. [Google Scholar] [CrossRef]
  9. Yan, Y.; Yao, X.J.; Wang, S.H.; Zhang, Y.D. A survey of computer-aided tumor diagnosis based on convolutional neural network. Biology 2021, 10, 1084. [Google Scholar] [CrossRef]
  10. Chambara, N.; Ying, M. The diagnostic efficiency of ultrasound computer-aided diagnosis in differentiating thyroid nodules: A systematic review and narrative synthesis. Cancers 2019, 11, 1759. [Google Scholar] [CrossRef]
  11. Ding, J.; Li, A.; Hu, Z.; Wang, L. Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2017; pp. 559–567. [Google Scholar]
  12. Wang, S.; Zhou, M.; Liu, Z.; Liu, Z.; Gu, D.; Zang, Y.; Tian, J. Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation. Med. Image Anal. 2017, 40, 172–183. [Google Scholar] [CrossRef] [PubMed]
  13. Wu, B.; Zhou, Z.; Wang, J.; Wang, Y. Joint learning for pulmonary nodule segmentation, attributes and malignancy prediction. In 2018 IEEE 15th International Symposium on Biomedical Imaging; Curran Associates: Washington, DC, USA, 2018; pp. 1109–1113. [Google Scholar]
  14. Shen, S.; Han, S.X.; Aberle, D.R.; Bui, A.A.; Hsu, W. An interpretable deep hierarchical semantic convolutional neural network for lung nodule malignancy classification. Expert Syst. Appl. 2019, 128, 84–95. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Ferreira-Junior, J.R.; Koenigkam-Santos, M.; Magalhaes Tenorio, A.P.; Faleiros, M.C.; Garcia Cipriano, F.E.; Fabro, A.T.; de Azevedo-Marques, P.M. CT-based radiomics for prediction of histologic subtype and metastatic disease in primary malignant lung neoplasms. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 163–172. [Google Scholar] [CrossRef] [PubMed]
  16. Firmino, M.; Angelo, G.; Morais, H.; Dantas, M.R.; Valentim, R. Computer-aided detection (CADe) and diagnosis (CADx) system for lung cancer with likelihood of malignancy. Biomed. Eng. OnLine 2016, 15, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  17. Choy, G.; Khalilzadeh, O.; Michalski, M.; Do, S.; Samir, A.E.; Pianykh, O.S.; Dreyer, K.J. Current applications and future impact of machine learning in radiology. Radiology 2018, 288, 318. [Google Scholar] [CrossRef]
  18. Ferreira, J.R.; Oliveira, M.C.; de Azevedo-Marques, P.M. Characterization of pulmonary nodules based on features of margin sharpness and texture. J. Digit. Imaging 2018, 31, 451–463. [Google Scholar] [CrossRef]
  19. Dhara, A.K.; Mukhopadhyay, S.; Dutta, A.; Garg, M.; Khandelwal, N. A combination of shape and texture features for classification of pulmonary nodules in lung CT images. J. Digit. Imaging 2016, 29, 466–475. [Google Scholar] [CrossRef] [Green Version]
  20. Felix, A.; Oliveira, M.; Machado, A.; Raniery, J. Using 3D texture and margin sharpness features on classification of small pulmonary nodules. In 2016 29th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI); Curran Associates: Washington, DC, USA, 2016; pp. 394–400. [Google Scholar]
  21. Beig, N.; Khorrami, M.; Alilou, M.; Prasanna, P.; Braman, N.; Orooji, M.; Madabhushi, A. Perinodular and intranodular radiomic features on lung CT images distinguish adenocarcinomas from granulomas. Radiology 2019, 290, 783. [Google Scholar] [CrossRef]
  22. Uthoff, J.; Stephens, M.J.; Newell Jr, J.D.; Hoffman, E.A.; Larson, J.; Koehn, N.; Sieren, J.C. Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT. Med. Phys. 2019, 46, 3207–3216. [Google Scholar] [CrossRef]
  23. Chen, G.; Xu, Z. Usage of intelligent medical aided diagnosis system under the deep convolutional neural network in lumbar disc herniation. Appl. Soft Comput. 2021, 111, 107674. [Google Scholar] [CrossRef]
  24. Bakheet, S.; Al-Hamadi, A. Computer-aided diagnosis of malignant melanoma using Gabor-based entropic features and multilevel neural networks. Diagnostics 2020, 10, 822. [Google Scholar] [CrossRef] [PubMed]
  25. Maqsood, S.; Damaševičius, R.; Maskeliūnas, R. TTCNN: A breast cancer detection and classification towards computer-aided diagnosis using digital mammography in early stages. Appl. Sci. 2022, 12, 3273. [Google Scholar] [CrossRef]
  26. Campanella, G.; Hanna, M.G.; Geneslaw, L.; Miraflor, A.; Werneck Krauss Silva, V.; Busam, K.J.; Fuchs, T.J. Clinical-grade computational pathology using weakly supervised deep learning on whole slide images. Nat. Med. 2019, 25, 1301–1309. [Google Scholar] [CrossRef]
  27. Chlebus, G.; Schenk, A.; Moltz, J.H.; van Ginneken, B.; Hahn, H.K.; Meine, H. Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing. Sci. Rep. 2018, 8, 15497. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. De Fauw, J.; Ledsam, J.R.; Romera-Paredes, B.; Nikolov, S.; Tomasev, N.; Blackwell, S.; Ronneberger, O. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 2018, 24, 1342–1350. [Google Scholar] [CrossRef]
  29. Dash, M.; Londhe, N.D.; Ghosh, S.; Semwal, A.; Sonawane, R.S. PsLSNet: Automated psoriasis skin lesion segmentation using modified U-Net-based fully convolutional network. Biomed. Signal Process. Control 2019, 52, 226–237. [Google Scholar] [CrossRef]
  30. Xie, F.; Yang, J.; Liu, J.; Jiang, Z.; Zheng, Y.; Wang, Y. Skin lesion segmentation using high-resolution convolutional neural network. Comput. Methods Programs Biomed. 2020, 186, 105241. [Google Scholar] [CrossRef]
  31. Abdoulaye, I.B.C.; Demir, Ö. Mamografi görüntülerinden kitle tespiti amacıyla öznitelik çıkarımı. In Ulusal Biyomedikal Cihaz Tasarımı ve Üretmi Sempozyumu; UBICTÜS: Istanbul, Turkey, 2017; Volume 1, pp. 33–36. [Google Scholar]
  32. Wang, P.; Hu, X.; Li, Y.; Liu, Q.; Zhu, X. Automatic cell nuclei segmentation and classification of breast cancer histopathology images. Signal Process. 2016, 122, 1–13. [Google Scholar] [CrossRef]
  33. Jiang, H.; Li, Z.; Li, S.; Zhou, F. An effective multi-classification method for NHL pathological images. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC); Curran Associates: Washington, DC, USA, 2018; pp. 763–768. [Google Scholar]
  34. Muhammad, W.; Hart, G.R.; Nartowt, B.; Farrell, J.J.; Johung, K.; Liang, Y.; Deng, J. Pancreatic cancer prediction through an artificial neural network. Front. Artif. Intell. 2019, 2, 2. [Google Scholar] [CrossRef] [Green Version]
  35. Busnatu, Ș.; Niculescu, A.G.; Bolocan, A.; Petrescu, G.E.; Păduraru, D.N.; Năstasă, I.; Martins, H. Clinical applications of artificial intelligence—An updated overview. J. Clinic. Med. 2022, 11, 2265. [Google Scholar] [CrossRef]
  36. Hunter, B.; Hindocha, S.; Lee, R.W. The role of artificial intelligence in early cancer diagnosis. Cancers 2022, 14, 1524. [Google Scholar] [CrossRef] [PubMed]
  37. Iakovidis, D.K.; Tsevas, S.; Savelonas, M.A.; Papamichalis, G. Image analysis framework for infection monitoring. IEEE Trans. Biomed. Eng. 2012, 59, 1135–1144. [Google Scholar] [CrossRef] [PubMed]
  38. Baur, B.; Bozdag, S. A canonical correlation analysis-based dynamic Bayesian network prior to infer gene regulatory networks from multiple types of biological data. J. Comput. Biol. 2015, 22, 289–299. [Google Scholar] [CrossRef]
  39. Guo, S.; Jiang, Q.; Chen, L.; Guo, D. Gene regulatory network inference using PLS-based methods. BMC Bioinform. 2016, 17, 545. [Google Scholar] [CrossRef] [Green Version]
  40. Penfold, C.A.; Shifaz, A.; Brown, P.E.; Nicholson, A.; Wild, D.L. CSI: A nonparametric Bayesian approach to network inference from multiple perturbed time series gene expression data. Stat. Appl. Genet. Mol. Biol. 2015, 14, 307–310. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Isci, S.; Dogan, H.; Ozturk, C.; Otu, H.H. Bayesian network prior: Network analysis of biological data using external knowledge. Bioinformatics 2014, 30, 860–867. [Google Scholar] [CrossRef] [Green Version]
  42. Schlitt, T. Approaches to modeling gene regulatory networks: A gentle introduction. Methods Mol. Biol. 2013, 1021, 13–35. [Google Scholar]
  43. Murphy, K.; Mian, S. Modelling Gene Expression Data Using Dynamic Bayesian Networks; Technical Report; Computer Science Division, University of California: Berkeley, CA, USA, 1999. [Google Scholar]
  44. Ni, Y.; Müller, P.; Wei, L.; Ji, Y. Bayesian graphical models for computational network biology. BMC Bioinform. 2018, 19, 59–69. [Google Scholar] [CrossRef]
  45. Kim, S.Y.; Imoto, S.; Miyano, S. Inferring gene networks from time series microarray data using dynamic Bayesian networks. Brief. Bioinform. 2003, 4, 228–235. [Google Scholar] [CrossRef] [Green Version]
  46. Kourou, K.; Rigas, G.; Papaloukas, C.; Mitsis, M.; Fotiadis, D.I. Cancer classification from time series microarray data through regulatory dynamic Bayesian networks. Comput. Biol. Med. 2020, 116, 103577. [Google Scholar] [CrossRef]
  47. Imani, F.; Daoud, M.; Moradi, M.; Abolmaesumi, P.; Mousavi, P. Tissue classification using depth-dependent ultrasound time series analysis: In-vitro animal study. In Medical Imaging 2011: Ultrasonic Imaging, Tomography, and Therapy; SPIE Medical Imaging: Lake Buena Vista, FL, USA, 2011; Volume 7968, pp. 120–126. [Google Scholar]
  48. Shen, W.; Zhou, M.; Yang, F.; Yu, D.; Dong, D.; Yang, C.; Tian, J. Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification. Pattern Recognit. 2017, 61, 663–673. [Google Scholar] [CrossRef]
  49. Al-Shabi, M.; Lee, H.K.; Tan, M. Gated-dilated networks for lung nodule classification in CT scans. IEEE Access 2019, 7, 178827–178838. [Google Scholar] [CrossRef]
  50. Ren, Y.; Tsai, M.Y.; Chen, L.; Wang, J.; Li, S.; Liu, Y.; Shen, C. A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 287–295. [Google Scholar] [CrossRef] [PubMed]
  51. Shen, C.; Tsai, M.Y.; Chen, L.; Li, S.; Nguyen, D.; Wang, J.; Jia, X. On the robustness of deep learning-based lung-nodule classification for CT images with respect to image noise. Phys. Med. Biol. 2020, 65, 245037. [Google Scholar] [CrossRef] [PubMed]
  52. Jiang, H.; Gao, F.; Xu, X.; Huang, F.; Zhu, S. Attentive and ensemble 3D dual path networks for pulmonary nodules classification. Neurocomputing 2020, 398, 422–430. [Google Scholar] [CrossRef]
  53. Al-Shabi, M.; Lan, B.L.; Chan, W.Y.; Ng, K.H.; Tan, M. Lung nodule classification using deep local-global networks. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 1815–1819. [Google Scholar] [CrossRef] [Green Version]
  54. Al-Shabi, M.; Shak, K.; Tan, M. 3D axial-attention for lung nodule classification. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 1319–1324. [Google Scholar] [CrossRef]
  55. Bosch, C.M.; Baumann, C.; Dehghani, S.; Sommersperger, M.; Johannigmann-Malek, N.; Kirchmair, K.; Nasseri, M.A. A tool for high-resolution volumetric optical coherence tomography by compounding radial-and linear acquired B-scans using registration. Sensors 2022, 22, 1135. [Google Scholar] [CrossRef]
  56. Murad, M.; Jalil, A.; Bilal, M.; Ikram, S.; Ali, A.; Khan, B.; Mehmood, K. Radial undersampling-based interpolation scheme for multislice CSMRI reconstruction techniques. BioMed Res. Int. 2021, 2021, 6638588. [Google Scholar] [CrossRef]
  57. Mendoza, L.; Christopher, M.; Brye, N.; Proudfoot, J.A.; Belghith, A.; Bowd, C.; Zangwill, L.M. Deep learning predicts demographic and clinical characteristics from optic nerve head OCT circle and radial scans. Investig. Ophthalmol. Vis. Sci. 2021, 62, 2120. [Google Scholar]
  58. Deng, C.X.; Wang, G.B.; Yang, X.R. Image edge detection algorithm based on improved canny operator. In 2013 International Conference on Wavelet Analysis and Pattern Recognition; Curran Associates: Washington, DC, USA, 2013; pp. 168–172. [Google Scholar]
  59. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartogr. Int. J. Geograph. Inf. Geovisualization 1973, 10, 112–122. [Google Scholar] [CrossRef] [Green Version]
  60. Sato, Y. Piecewise linear approximation of plane curves by perimeter optimization. Pattern Recognit. 1992, 25, 1535–1543. [Google Scholar] [CrossRef]
  61. Aresta, G.; Jacobs, C.; Araújo, T.; Cunha, A.; Ramos, I.; van Ginneken, B.; Campilho, A. iW-Net: An automatic and minimalistic interactive lung nodule segmentation deep network. Sci. Rep. 2019, 9, 11591. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  62. Aresta, G.; Araújo, T.; Jacobs, C.; Ginneken, B.V.; Cunha, A.; Ramos, I.; Campilho, A. Towards an automatic lung cancer screening system in low dose computed tomography. In Image Analysis for Moving Organ, Breast, and Thoracic Images; Springer: Cham, Switzerland, 2018; pp. 310–318. [Google Scholar]
  63. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  64. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2016; pp. 424–432. [Google Scholar]
  65. Giannopoulos, M.; Tsagkatakis, G.; Tsakalides, P. 4D U-nets for multi-temporal remote sensing data classification. Remote Sens. 2022, 14, 634. [Google Scholar] [CrossRef]
  66. Armato III, S.G.; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Clarke, L.P. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011, 38, 915–931. [Google Scholar] [CrossRef] [Green Version]
  67. Ferreira Junior, J.R.; Oliveira, M.C.; de Azevedo-Marques, P.M. Cloud-based NoSQL open database of pulmonary nodules for computer-aided lung cancer diagnosis and reproducible research. J. Digit. Imaging 2016, 29, 716–729. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  68. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Prior, F. The Cancer Imaging Archive (TCIA): Maintaining and operating a public information repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [Green Version]
  69. Wormanns, D.; Hamer, O.W. Glossary of terms for thoracic imaging-German version of the Fleischner Society recommendations. RoFo 2015, 187, 638–661. [Google Scholar]
  70. Calheiros, J.L.L.; de Amorim, L.B.V.; de Lima, L.L.; de Lima Filho, A.F.; Ferreira Júnior, J.R.; de Oliveira, M.C. The effects of perinodular features on solid lung nodule classification. J. Digit. Imaging 2021, 34, 798–810. [Google Scholar] [CrossRef]
  71. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  72. Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
  73. Huang, X.; Zhang, Y.; Chen, L.; Wang, J. U-net-based deformation vector field estimation for motion-compensated 4D-CBCT reconstruction. Med. Phys. 2020, 47, 3000–3012. [Google Scholar] [CrossRef] [PubMed]
  74. Chen, G.; Zhao, Y.; Huang, Q.; Gao, H. 4D-AirNet: A temporally-resolved CBCT slice reconstruction method synergizing analytical and iterative method with deep learning. Phys. Med. Biol. 2020, 65, 175020. [Google Scholar] [CrossRef] [PubMed]
  75. Choy, C.; Gwak, J.; Savarese, S. 4D spatio-temporal convnets: Minkowski convolutional neural networks. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. 2019, 3075–3084. [Google Scholar]
  76. Liu, T.; Meng, Q.; Huang, J.J.; Vlontzos, A.; Rueckert, D.; Kainz, B. Video summarization through reinforcement learning with a 3D spatio-temporal U-net. IEEE Trans. Image Process. 2022, 31, 1573–1586. [Google Scholar] [CrossRef] [PubMed]
  77. Ismail Fawaz, H.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef] [Green Version]
  78. Abanda, A.; Mori, U.; Lozano, J.A. A review on distance-based time series classification. Data Min. Knowl. Discov. 2019, 33, 378–412. [Google Scholar] [CrossRef] [Green Version]
  79. Iwana, B.K.; Uchida, S. An empirical survey of data augmentation for time series classification with neural networks. PLoS ONE 2021, 16, e0254841. [Google Scholar] [CrossRef]
Figure 1. Method of obtaining series by radial scanning.
Figure 2. (a) 3D segmented and smoothed nodule and (b) a 180 × 180 matrix encoding the boundary shape of each slice.
Figure 3. U-Net Architecture.
Table 1. Metrics for classification and training times (in minutes) for 4D U-Net models.

Depth | No. of Filters | Recall | Precision | Accuracy | Time
3 | 4 | 80.13 | 81.54 | 83.45 | 469.92
3 | 8 | 92.41 | 92.63 | 92.84 | 661.8
4 | 4 | 80.04 | 79.63 | 81.22 | 477.74
4 | 8 | 87.19 | 88.01 | 88.73 | 668.4
Table 2. The proposed method’s performance compared to the state-of-the-art methods.

Method | AUC | Recall | Precision | Accuracy | F1
HSCNN [14] | 85.6 | 70.5 | N/A | 84.2 | N/A
Multi-Crop [48] | 93.0 | 77.0 | N/A | 87.14 | N/A
Local-Global [53] | 95.62 | 88.66 | 87.38 | 88.46 | 88.01
Gated-Dilated [49] | 95.14 | 92.21 | 91.85 | 92.57 | 92.03
3D DPN [52] | N/A | 92.04 | N/A | 90.24 | N/A
MRC-DNN [50] | N/A | 81.00 | N/A | 90.00 | N/A
Perturbated DNN [51] | 91.0 | 90.0 | N/A | 83.0 | N/A
3D Axial-Attention [54] | 96.17 | 92.36 | 92.59 | 92.81 | 92.47
Our method | 96.19 | 92.41 | 92.63 | 92.84 | 92.51

