
Image Processing and Analysis for Preclinical and Clinical Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 55392

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor
Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
Interests: non-invasive imaging techniques: positron emission tomography (PET), computerized tomography (CT), and magnetic resonance (MR); radiomics and artificial intelligence in clinical health care applications; processing, quantification, and correction methods for ex vivo and in vivo medical images

Guest Editor
1. Ri.MED Foundation, via Bandiera 11, 90133 Palermo, Italy
2. Research Affiliate Long Term, Laboratory of Computational Computer Vision (LCCV), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
Interests: biomedical image processing and analysis; radiomics; artificial intelligence; machine learning; deep learning

Guest Editor
Department of Biomedicine, Neuroscience and Advanced Diagnostics (BIND), University Hospital of Palermo, Via del Vespro 129, 90127 Palermo, Italy
Interests: liver imaging; pancreatic imaging; hepatocellular carcinoma; radiomics; texture analysis; diffuse liver diseases; emergency radiology

Special Issue Information

Dear Colleagues,

Preclinical and clinical imaging aims to characterize and measure biological processes and diseases in animals and humans. In recent years, there has been growing interest in the quantitative analysis of clinical images using techniques such as Positron Emission Tomography, Computerized Tomography, and Magnetic Resonance Imaging, mainly applied to texture analysis and radiomics. In particular, various image processing and analysis algorithms based on pattern recognition, artificial intelligence, and computer graphics methods have been proposed to extract features from biomedical images. These quantitative approaches are expected to have a positive clinical impact, enabling images to be analyzed quantitatively, revealing biological processes and diseases, and predicting response to treatment.

This Special Issue, entitled “Image Processing and Analysis for Preclinical and Clinical Applications”, will present a collection of high-quality studies covering state-of-the-art and innovative approaches to image processing and analysis across a variety of imaging modalities, as well as the expected clinical applicability of these approaches for personalized, patient-tailored medicine.

Dr. Alessandro Stefano
Dr. Albert Comelli
Dr. Federica Vernuccio
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • in vivo imaging
  • therapy response prediction
  • medical diagnosis support systems
  • detection, segmentation, and classification of tissues
  • biomedical image analysis and processing
  • personalized medicine
  • artificial intelligence
  • texture analysis
  • radiomics

Published Papers (15 papers)


Editorial


6 pages, 219 KiB  
Editorial
Image Processing and Analysis for Preclinical and Clinical Applications
by Alessandro Stefano, Federica Vernuccio and Albert Comelli
Appl. Sci. 2022, 12(15), 7513; https://doi.org/10.3390/app12157513 - 26 Jul 2022
Viewed by 1172
Abstract
Preclinical and clinical imaging aims to characterize and measure biological processes and diseases in animals [...]

Research


24 pages, 34213 KiB  
Article
Hardware Optimizations of the X-ray Pre-Processing for Interventional Computed Tomography Using the FPGA
by Daniele Passaretti, Mukesh Ghosh, Shiras Abdurahman, Micaela Lambru Egito and Thilo Pionteck
Appl. Sci. 2022, 12(11), 5659; https://doi.org/10.3390/app12115659 - 2 Jun 2022
Cited by 4 | Viewed by 4527
Abstract
In computed tomography imaging, the computationally intensive tasks are the pre-processing of 2D detector data to generate total attenuation or line integral projections and the reconstruction of the 3D volume from the projections. This paper proposes optimizing the X-ray pre-processing to compute total attenuation projections by avoiding the intermediate step of converting detector data to intensity images. In addition, to fulfill real-time requirements, we design a configurable hardware architecture for data acquisition systems on FPGAs, with the goal of “on-the-fly” pre-processing of 2D projections. Finally, this architecture was configured to explore and analyze different arithmetic representations, such as floating-point and fixed-point data formats. This design space exploration has allowed us to find the representation and data format that minimize execution time and hardware costs while not affecting image quality. Furthermore, the proposed architecture was integrated in an open-interface computed tomography device, used for evaluating the image quality of the pre-processed 2D projections and the reconstructed 3D volume. Compared with the state-of-the-art pre-processing algorithm that makes use of intensity images, the latency was decreased by 4.125× and the resource utilization by ∼6.5×, with a mean square error on the order of 10⁻¹⁵ for all the selected phantom experiments. Finally, by using the fixed-point representation at the different data precisions, the latency and the resource utilization were further decreased, and a mean square error on the order of 10⁻¹ was reached.
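The pre-processing step the paper optimizes (turning raw detector counts into line-integral projections without materializing intensity images, and doing so in fixed point) can be illustrated with a short emulation. This is a sketch only, not the paper's FPGA design; the function names, the Q16 fixed-point format, and the sample counts are assumptions:

```python
import math

def attenuation_float(raw, dark, air):
    # Total attenuation (line integral) via Beer-Lambert:
    # p = ln((air - dark) / (raw - dark)), computed directly from detector
    # counts without producing an intermediate intensity image.
    return math.log((air - dark) / (raw - dark))

def attenuation_fixed(raw, dark, air, frac_bits=16):
    # Same computation with the division emulated in fixed point: the ratio
    # is scaled by 2**frac_bits and truncated, mimicking an FPGA datapath.
    scale = 1 << frac_bits
    ratio = ((air - dark) * scale) // (raw - dark)   # fixed-point divide
    return math.log(ratio / scale)                   # log kept in float here

# (raw count, dark-field offset, air-scan count) for three detector pixels
samples = [(12000, 100, 60000), (3000, 100, 60000), (40000, 100, 60000)]
mse = sum((attenuation_float(r, d, a) - attenuation_fixed(r, d, a)) ** 2
          for r, d, a in samples) / len(samples)
print(f"MSE float vs. fixed point: {mse:.2e}")
```

Varying `frac_bits` reproduces the paper's trade-off: fewer fractional bits cut hardware cost but raise the mean square error against the floating-point reference.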

14 pages, 1949 KiB  
Article
Artificial Intelligence Applications on Restaging [¹⁸F]FDG PET/CT in Metastatic Colorectal Cancer: A Preliminary Report of Morpho-Functional Radiomics Classification for Prediction of Disease Outcome
by Pierpaolo Alongi, Alessandro Stefano, Albert Comelli, Alessandro Spataro, Giuseppe Formica, Riccardo Laudicella, Helena Lanzafame, Francesco Panasiti, Costanza Longo, Federico Midiri, Viviana Benfante, Ludovico La Grutta, Irene Andrea Burger, Tommaso Vincenzo Bartolotta, Sergio Baldari, Roberto Lagalla, Massimo Midiri and Giorgio Russo
Appl. Sci. 2022, 12(6), 2941; https://doi.org/10.3390/app12062941 - 13 Mar 2022
Cited by 11 | Viewed by 3168
Abstract
The aim of this study was to investigate the application of textural feature analysis of [¹⁸F]FDG PET/CT images to propose radiomics models able to predict, at an early stage, disease progression (PD) and survival outcome in metastatic colorectal cancer (MCC) patients after first adjuvant therapy. For this purpose, 52 MCC patients who underwent [¹⁸F]FDG PET/CT during the disease restaging process after the first adjuvant therapy were analyzed. Follow-up data were recorded for a minimum of 12 months after PET/CT. Radiomics features were extracted from each avid lesion in PET and low-dose CT images. A hybrid descriptive-inferential method and discriminant analysis (DA) were used for feature selection and for predictive model implementation, respectively. The performance of the features in predicting PD was assessed per lesion, per patient, and for liver lesions. All lesions were then considered to assess the diagnostic performance of the features in discriminating liver lesions. In predicting PD in the whole group of patients, in the per-lesion radiomics analysis of PET features, only the GLZLM_GLNU feature was selected, while three features were selected from the PET/CT image data set; associating CT features with PET features yielded greater accuracy (AUROC 65.22%). In the per-patient analysis, three features were selected for stand-alone PET images and one feature (HUKurtosis) for the PET/CT data set. Focusing on liver metastases, the per-lesion analysis identified one PET feature (GLZLM_GLNU) from PET images and three features from the PET/CT data set. Similarly, in the per-patient analysis of liver lesions, we found three PET features and one PET/CT feature (HUKurtosis). In discriminating liver metastases from the other lesions, the best stand-alone PET result was obtained with one feature (SUVbwmin; AUROC 88.91%), and with two features for the merged PET/CT analysis (AUROC 95.33%). In conclusion, our machine learning model on restaging [¹⁸F]FDG PET/CT was demonstrated to be feasible and potentially useful in the predictive evaluation of disease progression in MCC.

16 pages, 2611 KiB  
Article
Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks?
by Paulina Vélez, Manuel Miranda, Carmen Serrano and Begoña Acha
Appl. Sci. 2022, 12(4), 2092; https://doi.org/10.3390/app12042092 - 17 Feb 2022
Cited by 4 | Viewed by 2045
Abstract
Basal Cell Carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is producing a high overload in dermatology services. It is therefore convenient to aid physicians in detecting it early. Thus, in this paper, we propose a tool for the detection of BCC to provide a prioritization in the teledermatology consultation. Firstly, we analyze whether a previous segmentation of the lesion improves the subsequent classification of the lesion. Secondly, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. An accuracy of 98% for distinguishing BCC from nevus and of 95% for classifying BCC vs. all lesions has been obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all the metrics. Finally, we can conclude that when deep neural networks are used for classification, a previous segmentation of the lesion does not improve the classification results. Likewise, the ensemble of different neural network configurations improves the classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones.

12 pages, 2212 KiB  
Article
Deep Learning Networks for Automatic Retroperitoneal Sarcoma Segmentation in Computerized Tomography
by Giuseppe Salvaggio, Giuseppe Cutaia, Antonio Greco, Mario Pace, Leonardo Salvaggio, Federica Vernuccio, Roberto Cannella, Laura Algeri, Lorena Incorvaia, Alessandro Stefano, Massimo Galia, Giuseppe Badalamenti and Albert Comelli
Appl. Sci. 2022, 12(3), 1665; https://doi.org/10.3390/app12031665 - 5 Feb 2022
Cited by 11 | Viewed by 2397
Abstract
The volume estimation of retroperitoneal sarcoma (RPS) is often difficult due to its huge dimensions and irregular shape; thus, it often requires manual segmentation, which is time-consuming and operator-dependent. This study aimed to evaluate two fully automated deep learning networks (ENet and ERFNet) for RPS segmentation. This retrospective study included 20 patients with RPS who received an abdominal computed tomography (CT) examination. Forty-nine CT examinations, with a total of 72 lesions, were included. Manual segmentation was performed by two radiologists in consensus, and automatic segmentation was performed using ENet and ERFNet. Significant differences between manual and automatic segmentation were tested using analysis of variance (ANOVA). A set of performance indicators for shape comparison, namely sensitivity, positive predictive value (PPV), Dice similarity coefficient (DSC), volume overlap error (VOE), and volumetric difference (VD), was calculated. There were no significant differences between the RPS volumes obtained using manual segmentation and ENet (p-value = 0.935), manual segmentation and ERFNet (p-value = 0.544), or ENet and ERFNet (p-value = 0.119). The sensitivity, PPV, DSC, VOE, and VD for ENet and ERFNet were 91.54% and 72.21%, 89.85% and 87.00%, 90.52% and 74.85%, 16.87% and 36.85%, and 2.11% and −14.80%, respectively. Using a dedicated GPU, ENet took around 15 s per segmentation versus 13 s for ERFNet; on a CPU, ENet took around 2 min versus 1 min for ERFNet. The manual approach required approximately one hour per segmentation. In conclusion, fully automatic deep learning networks are reliable methods for RPS volume assessment. ENet performs better than ERFNet for automatic segmentation, though it requires more time.
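The five overlap indicators reported above have standard definitions in terms of true/false positives between the automatic and manual masks. A minimal sketch (the sign convention for VD is an assumption):

```python
def overlap_metrics(pred, truth):
    # pred/truth: sets of voxel coordinates in the automatic and manual masks.
    tp = len(pred & truth)          # voxels in both masks
    fp = len(pred - truth)          # automatic-only voxels
    fn = len(truth - pred)          # manual-only voxels
    sensitivity = tp / (tp + fn)                 # recall w.r.t. manual mask
    ppv = tp / (tp + fp)                         # positive predictive value
    dsc = 2 * tp / (2 * tp + fp + fn)            # Dice similarity coefficient
    voe = 1 - tp / (tp + fp + fn)                # volume overlap error (1 - Jaccard)
    vd = (len(pred) - len(truth)) / len(truth)   # relative volume difference
    return sensitivity, ppv, dsc, voe, vd

truth = {(x, y) for x in range(10) for y in range(10)}    # 100-voxel "lesion"
pred = {(x, y) for x in range(1, 10) for y in range(10)}  # 90 voxels, all inside
print(overlap_metrics(pred, truth))
```

Here the automatic mask misses one row of the lesion, so sensitivity is 0.9, PPV is 1.0, and VD is negative (undersegmentation).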

16 pages, 17698 KiB  
Article
Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting
by Nur Banu Albayrak and Yusuf Sinan Akgul
Appl. Sci. 2022, 12(3), 1390; https://doi.org/10.3390/app12031390 - 27 Jan 2022
Cited by 3 | Viewed by 13603
Abstract
Estimation of the prostate volume with ultrasound offers many advantages such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal Ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. As experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, we propose an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, our system detects four diameter endpoints from the transverse and two diameter endpoints from the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method that addresses characteristic problems of AUS images. We formed a novel prostate AUS data set from 305 patients with both transverse and sagittal planes. The data set includes MRI images for 75 of these patients. At least one expert manually marked all the data. Extensive experiments performed on this data set showed that the proposed system's estimates fall within the range of the experts' volume estimations, and that our system can be used in clinical practice.
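The "standard ellipsoid formula" mentioned above is the usual V = (π/6)·W·H·L used in clinical prostate volumetry; once the three orthogonal diameters are recovered from the detected endpoints, the volume is a one-liner. The diameter values below are hypothetical:

```python
import math

def ellipsoid_volume(width_cm, height_cm, length_cm):
    # Standard ellipsoid formula for prostate volumetry:
    # V = (pi / 6) * W * H * L, with the three orthogonal diameters in cm
    # (1 cm^3 = 1 mL).
    return math.pi / 6 * width_cm * height_cm * length_cm

# Width and height measured on the transverse plane, length on the sagittal
# plane, as in the classical procedure (values are illustrative only).
volume_ml = ellipsoid_volume(4.5, 3.0, 4.0)
print(f"Estimated prostate volume: {volume_ml:.1f} mL")
```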

11 pages, 10404 KiB  
Article
Clinical Comparison of the Glomerular Filtration Rate Calculated from Different Renal Depths and Formulae
by Wen-Ling Hsu, Shu-Min Chang and Chin-Chuan Chang
Appl. Sci. 2022, 12(2), 698; https://doi.org/10.3390/app12020698 - 11 Jan 2022
Cited by 2 | Viewed by 2580
Abstract
A camera-based method using Technetium-99m diethylenetriaminepentaacetic acid (Tc-99m DTPA) is commonly used to calculate the glomerular filtration rate (GFR), especially as it can easily provide split renal function. Renal depth is the main factor affecting the accuracy of GFR measurement. This study aimed to compare the renal depths obtained from three formulae and from a CT scan, and to calculate the GFRs by the four methods. We retrospectively reviewed the medical records of patients receiving a renal dynamic scan. All patients underwent a laboratory test within one month, and a computed tomography (CT) scan within two months, before or after the renal dynamic scan. The GFRs were calculated from the renal dynamic scan using the renal depth measured with three formulae (Tonnesen's, Itoh K's, and Taylor's) and with a CT scan. The renal depths measured by the above four methods were compared, and the GFRs were compared to the modified estimated GFR (eGFR). Fifty-one patients were enrolled in the study. The mean modified eGFR was 60.5 ± 42.7 mL/min. The mean GFRs calculated by the three formulae and by CT were 45.3 ± 23.3, 54.7 ± 27.5, 56.5 ± 26.3, and 63.7 ± 30.0 mL/min, respectively. All of them correlated well with the modified eGFR (r = 0.87, 0.87, 0.87, and 0.84, respectively). The Bland–Altman plot revealed good consistency between the GFR calculated by Tonnesen's formula and the modified eGFR. The renal depths measured using the three formulae were smaller than those measured using the CT scan, and the right renal depth was always larger than the left. In patients with modified eGFR > 60 mL/min, the GFR calculated by CT was the closest to the modified eGFR. The renal depth measured by CT scan is deeper than that estimated using the formulae, and this influences the GFR calculated by Gates' method. The GFR calculated by CT is more closely related to the modified eGFR when the modified eGFR is > 60 mL/min.

30 pages, 27656 KiB  
Article
Fundus Image Registration Technique Based on Local Feature of Retinal Vessels
by Roziana Ramli, Khairunnisa Hasikin, Mohd Yamani Idna Idris, Noor Khairiah A. Karim and Ainuddin Wahid Abdul Wahab
Appl. Sci. 2021, 11(23), 11201; https://doi.org/10.3390/app112311201 - 25 Nov 2021
Cited by 9 | Viewed by 2090
Abstract
Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometrical transformations estimated between feature point correspondences. To ensure accurate registration, the feature points extracted must lie on the retinal vessels and be distributed throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates the characteristics of retinal vessels and noise to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance was tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiment, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). Then, CURVE was paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered less than 27.612% of the image pairs. A one-way ANOVA analysis showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).

14 pages, 3067 KiB  
Article
Robustness of PET Radiomics Features: Impact of Co-Registration with MRI
by Alessandro Stefano, Antonio Leal, Selene Richiusa, Phan Trang, Albert Comelli, Viviana Benfante, Sebastiano Cosentino, Maria G. Sabini, Antonino Tuttolomondo, Roberto Altieri, Francesco Certo, Giuseppe Maria Vincenzo Barbagallo, Massimo Ippolito and Giorgio Russo
Appl. Sci. 2021, 11(21), 10170; https://doi.org/10.3390/app112110170 - 30 Oct 2021
Cited by 34 | Viewed by 3039
Abstract
Radiomics holds great promise in the field of cancer management. However, the clinical application of radiomics has been hampered by uncertainty about the robustness of the features extracted from images. Previous studies have reported that radiomics features are sensitive to changes in voxel size resampling and interpolation, image perturbation, or slice thickness. This study aims to observe the variability of positron emission tomography (PET) radiomics features under the impact of co-registration with magnetic resonance imaging (MRI), using the difference percentage coefficient and Spearman's correlation coefficient, for three groups of images: (i) original PET, (ii) PET after co-registration with T1-weighted MRI, and (iii) PET after co-registration with FLAIR MRI. Specifically, seventeen patients with brain cancers undergoing [¹¹C]-Methionine PET were considered. Subsequently, the PET images were co-registered with the MRI sequences, and 107 features were extracted for each group of images. The variability analysis revealed that shape features, first-order features, and two subgroups of higher-order features possessed good robustness, unlike the remaining groups of features, which showed large differences in the difference percentage coefficient. Furthermore, based on Spearman's correlation coefficient, approximately 40% of the selected features differed across the three groups of images. This is an important consideration for users conducting radiomics studies with image co-registration constraints, to avoid errors in cancer diagnosis, prognosis, and clinical outcome prediction.
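The two variability measures used above can be illustrated on a toy feature vector. The paper's exact definition of the difference percentage coefficient is not reproduced here; the version below (absolute change relative to the original value) is an assumption, while the Spearman computation is the classic rank-difference formula, valid when there are no ties:

```python
def difference_percentage(v_orig, v_coreg):
    # Hypothetical difference percentage coefficient: absolute change of a
    # feature value after co-registration, relative to the original value.
    return abs(v_orig - v_coreg) / abs(v_orig) * 100

def spearman(x, y):
    # Spearman's rank correlation via rho = 1 - 6 * sum(d_i^2) / (n*(n^2-1)),
    # where d_i is the rank difference of pair i (no ties assumed).
    n = len(x)
    rank_x = {v: i for i, v in enumerate(sorted(x))}
    rank_y = {v: i for i, v in enumerate(sorted(y))}
    d2 = sum((rank_x[a] - rank_y[b]) ** 2 for a, b in zip(x, y))
    return 1 - 6 * d2 / (n * (n * n - 1))

orig = [2.10, 3.50, 1.20, 4.80, 2.90]    # one feature over 5 patients, original PET
coreg = [2.05, 3.70, 1.10, 4.90, 3.10]   # same feature after MRI co-registration
max_dp = max(difference_percentage(a, b) for a, b in zip(orig, coreg))
print(f"max difference percentage: {max_dp:.1f}%, rho: {spearman(orig, coreg):.2f}")
```

A feature would be called robust here when its difference percentage stays small and its patient ranking (Spearman's rho) is preserved after co-registration.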

18 pages, 689 KiB  
Article
Pathologic Complete Response Prediction after Neoadjuvant Chemoradiation Therapy for Rectal Cancer Using Radiomics and Deep Embedding Network of MRI
by Seunghyun Lee, Joonseok Lim, Jaeseung Shin, Sungwon Kim and Heasoo Hwang
Appl. Sci. 2021, 11(20), 9494; https://doi.org/10.3390/app11209494 - 13 Oct 2021
Cited by 3 | Viewed by 2369
Abstract
Assessment of magnetic resonance imaging (MRI) after neoadjuvant chemoradiation therapy (nCRT) is essential in rectal cancer staging and treatment planning. However, when predicting the pathologic complete response (pCR) after nCRT for rectal cancer, existing works either rely on simple quantitative evaluation based on radiomics features or only partially analyze multi-parametric MRI. We propose an effective pCR prediction method based on a novel multi-parametric MRI embedding. We first seek to extract volumetric features of tumors that can be found only by analyzing multiple MRI sequences jointly. Specifically, we encapsulate multiple MRI sequences into multi-sequence fusion images (MSFI) and generate an MSFI embedding. We merge radiomics features, which capture important characteristics of tumors, with the MSFI embedding to generate a multi-parametric MRI embedding, and then use it to predict pCR with a random forest classifier. Our extensive experiments demonstrate that using all given MRI sequences is the most effective strategy regardless of the dimension reduction method. The proposed method outperformed all variants with different combinations of feature vectors and dimension reduction methods or different classification models, and comparative experiments demonstrate that it outperformed four competing baselines in terms of AUC and F1-score. We used MRI sequences from 912 patients with rectal cancer, a much larger sample than in any existing work.

9 pages, 2933 KiB  
Article
ZFTool: A Software for Automatic Quantification of Cancer Cell Mass Evolution in Zebrafish
by María J. Carreira, Nicolás Vila-Blanco, Pablo Cabezas-Sainz and Laura Sánchez
Appl. Sci. 2021, 11(16), 7721; https://doi.org/10.3390/app11167721 - 22 Aug 2021
Cited by 3 | Viewed by 2219
Abstract
Background: Zebrafish (Danio rerio) is a model organism for the study of human cancer. Compared with the murine model, the zebrafish model has several properties ideal for personalized therapies. The transparency of zebrafish embryos and the development of the pigment-deficient “casper” zebrafish line make it possible to directly observe cancer formation and progression in the living animal. Automatic quantification of cellular proliferation in vivo is critical to the development of personalized medicine. Methods: A new methodology was defined to automatically quantify cancer cell evolution. ZFTool was developed to establish a base threshold that eliminates embryo autofluorescence, automatically measure the area and intensity of GFP (green fluorescent protein)-marked cells, and define a proliferation index. Results: The proliferation index, automatically computed on different targets, demonstrates the efficiency of ZFTool in providing a good automatic quantification of cancer cell evolution and dissemination. Conclusion: Our results demonstrate that ZFTool is a reliable tool for the automatic quantification of the proliferation index as a measure of cancer mass evolution in zebrafish, eliminating the influence of embryo autofluorescence.
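The ingredients named above (a base threshold removing autofluorescence, the area and intensity of GFP-marked pixels, and a ratio-style proliferation index) can be combined in a toy sketch. The exact formulation in ZFTool is not reproduced here; the function names, the index definition, and the pixel values are assumptions:

```python
def integrated_fluorescence(image, threshold):
    # Sum of above-threshold GFP intensities: couples the area of marked
    # cells with their intensity while discarding embryo autofluorescence.
    return sum(v - threshold for row in image for v in row if v > threshold)

def proliferation_index(image_t, image_t0, threshold):
    # Hypothetical index: tumor fluorescence at time t relative to the same
    # embryo at injection time t0 (values > 1 indicate cell-mass growth).
    return (integrated_fluorescence(image_t, threshold)
            / integrated_fluorescence(image_t0, threshold))

# Toy 3x3 "fluorescence images" at injection (t0) and 48 h later (t1).
t0 = [[0, 10, 80], [0, 90, 70], [0, 0, 0]]
t1 = [[0, 40, 120], [30, 140, 110], [0, 20, 0]]
print(round(proliferation_index(t1, t0, threshold=20), 2))   # -> 1.89
```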

12 pages, 2590 KiB  
Article
Transfer Learning for an Automated Detection System of Fractures in Patients with Maxillofacial Trauma
by Maria Amodeo, Vincenzo Abbate, Pasquale Arpaia, Renato Cuocolo, Giovanni Dell’Aversana Orabona, Monica Murero, Marco Parvis, Roberto Prevete and Lorenzo Ugga
Appl. Sci. 2021, 11(14), 6293; https://doi.org/10.3390/app11146293 - 7 Jul 2021
Cited by 6 | Viewed by 3372
Abstract
An original maxillofacial fracture detection system (MFDS), based on convolutional neural networks and transfer learning, is proposed to detect traumatic fractures in patients. A convolutional neural network pre-trained on non-medical images was re-trained and fine-tuned using computed tomography (CT) scans to produce a model for the classification of future CTs as either “fracture” or “noFracture”. The model was trained on a total of 148 CTs (120 patients labeled “fracture” and 28 labeled “noFracture”). The validation dataset, used for statistical analysis, comprised 30 patients (5 “noFracture” and 25 “fracture”). An additional 30 CT scans, comprising 25 “fracture” and 5 “noFracture” images, were used as the test dataset for final testing. Tests were carried out both by considering single slices and by grouping the slices per patient. A patient was categorized as fractured if two consecutive slices were classified with a fracture probability higher than 0.99. The per-patient results show that the model's accuracy in classifying maxillofacial fractures is 80%. Even if the MFDS model cannot replace the radiologist's work, it can provide valuable assistive support, reducing the risk of human error, preventing patient harm by minimizing diagnostic delays, and reducing the incongruous burden of hospitalization.
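The patient-level decision rule quoted above (two consecutive slices with fracture probability above 0.99) is easy to state precisely. A minimal sketch, with hypothetical probability values:

```python
def patient_is_fractured(slice_probs, threshold=0.99, run_length=2):
    # Decision rule from the abstract: flag the patient when at least
    # `run_length` consecutive slices exceed the fracture-probability
    # threshold; isolated high-probability slices are ignored.
    run = 0
    for p in slice_probs:
        run = run + 1 if p > threshold else 0
        if run >= run_length:
            return True
    return False

print(patient_is_fractured([0.10, 0.995, 0.40, 0.992]))   # isolated hits -> False
print(patient_is_fractured([0.20, 0.993, 0.999, 0.30]))   # two in a row -> True
```

Requiring a run of consecutive positive slices is what suppresses single-slice false positives at the patient level.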

8 pages, 1238 KiB  
Article
Left Atrial Flow Stasis in Patients Undergoing Pulmonary Vein Isolation for Paroxysmal Atrial Fibrillation Using 4D-Flow Magnetic Resonance Imaging
by Hana Sheitt, Hansuk Kim, Stephen Wilton, James A White and Julio Garcia
Appl. Sci. 2021, 11(12), 5432; https://doi.org/10.3390/app11125432 - 11 Jun 2021
Cited by 4 | Viewed by 2299
Abstract
Atrial fibrillation (AF) is associated with systemic thrombo-embolism and stroke events, which do not appear significantly reduced following successful pulmonary vein (PV) ablation. Prior studies supported that thrombus formation is associated with left atrial (LA) flow alterations, particularly flow stasis. Recently, time-resolved three-dimensional phase-contrast imaging (4D-flow) has shown the ability to quantify LA stasis. This study aims to demonstrate that LA stasis, derived from 4D-flow, is a useful biomarker of LA recovery in patients with AF. Our hypothesis is that LA recovery will be associated with a reduction in LA stasis. We recruited 148 subjects with paroxysmal AF (40 following 3–4 months PV ablation and 108 pre-PV ablation) and 24 controls (CTL). All subjects underwent a cardiac magnetic resonance imaging (MRI) exam, inclusive of 4D-flow. The LA was isolated within the 4D-flow dataset to constrain the stasis maps. Control mean LA stasis was lower than in the pre-ablation cohort (30 ± 12% vs. 47 ± 18%, p < 0.001). In addition, mean LA stasis was reduced in the post-ablation cohort compared with pre-ablation (36 ± 15% vs. 47 ± 18%, p = 0.002). This study demonstrated that 4D-flow-derived LA stasis mapping is clinically relevant and revealed stasis changes in the LA body pre- and post-pulmonary vein ablation.
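A common way to compute a 4D-flow stasis map, and the mean stasis percentages reported above, is to take the fraction of cardiac timeframes in which each voxel's velocity magnitude stays below a low-velocity threshold, then average over the segmented LA. The sketch below assumes that definition and a 0.1 m/s threshold; both are illustrative assumptions, not details confirmed by the abstract:

```python
import numpy as np

def mean_la_stasis(velocity_mag, la_mask, v_thresh=0.1):
    """Mean LA stasis (%) from a 4D-flow velocity-magnitude series.

    velocity_mag: array of shape (t, x, y, z), velocity magnitude in m/s.
    la_mask: boolean array of shape (x, y, z), the LA segmentation.
    Per-voxel stasis = fraction of timeframes with |v| < v_thresh;
    the returned value averages that fraction over the LA, in percent.
    """
    stasis_map = (velocity_mag < v_thresh).mean(axis=0)  # fraction of time
    return 100.0 * stasis_map[la_mask].mean()
```

Constraining the map to the LA segmentation, as the abstract describes, keeps adjacent high-velocity structures (e.g., the ventricle) from diluting the stasis estimate.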

12 pages, 2643 KiB  
Article
Early Monitoring Response to Therapy in Patients with Brain Lesions Using the Cumulative SUV Histogram
by Alessandro Stefano, Pietro Pisciotta, Marco Pometti, Albert Comelli, Sebastiano Cosentino, Francesco Marletta, Salvatore Cicero, Maria G. Sabini, Massimo Ippolito and Giorgio Russo
Appl. Sci. 2021, 11(7), 2999; https://doi.org/10.3390/app11072999 - 27 Mar 2021
Cited by 1 | Viewed by 1673
Abstract
Gamma Knife treatment is an alternative to traditional brain surgery and whole-brain radiation therapy for treating cancers that are inaccessible via conventional treatments. To assess the effectiveness of Gamma Knife treatments, functional imaging can play a crucial role. The aim of this study is to evaluate new prognostic indices to perform an early assessment of treatment response to therapy using positron emission tomography imaging. The parameters currently used in nuclear medicine assessments can be affected by statistical fluctuations and/or cannot provide information on tumor extension and heterogeneity. To overcome these limitations, the Cumulative standardized uptake value (SUV) Histogram (CSH) and Area Under the Curve (AUC) indices were evaluated to obtain additional information on treatment response. For this purpose, the absolute level of [11C]-Methionine (MET) uptake was measured and its heterogeneity distribution within lesions was evaluated by calculating the CSH and AUC indices. The CSH and AUC parameters show good agreement with patient outcomes after Gamma Knife treatments. Furthermore, no relevant correlations were found between the CSH and AUC indices and those usually used in the nuclear medicine environment. The CSH and AUC indices could be a useful tool for assessing patient responses to therapy.
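A cumulative SUV histogram is commonly defined as the fraction of lesion volume with uptake at or above each threshold, swept from 0 to 100% of SUVmax; its AUC then summarizes uptake heterogeneity (a more heterogeneous lesion yields a lower AUC). The sketch below assumes that standard formulation; it is not the authors' implementation:

```python
import numpy as np

def csh_auc(suv_values, n_bins=100):
    """Cumulative SUV Histogram and its AUC for one lesion.

    suv_values: SUV of every voxel inside the lesion.
    Returns (csh, auc): csh[i] is the fraction of lesion volume with
    uptake >= (i / n_bins) * SUVmax; auc integrates csh over the
    normalized threshold axis [0, 1] by the trapezoidal rule.
    """
    suv = np.asarray(suv_values, dtype=float)
    thresholds = np.linspace(0.0, 1.0, n_bins + 1) * suv.max()
    csh = np.array([(suv >= t).mean() for t in thresholds])
    auc = ((csh[:-1] + csh[1:]) / 2.0).sum() / n_bins
    return csh, auc
```

Unlike a single SUVmax reading, the AUC depends on the whole uptake distribution, which is why it can carry information on tumor extension and heterogeneity that point estimates miss.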

13 pages, 2306 KiB  
Article
Deep Learning-Based Methods for Prostate Segmentation in Magnetic Resonance Imaging
by Albert Comelli, Navdeep Dahiya, Alessandro Stefano, Federica Vernuccio, Marzia Portoghese, Giuseppe Cutaia, Alberto Bruno, Giuseppe Salvaggio and Anthony Yezzi
Appl. Sci. 2021, 11(2), 782; https://doi.org/10.3390/app11020782 - 15 Jan 2021
Cited by 54 | Viewed by 6260
Abstract
Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is a time-consuming task, we present three deep-learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized convNet (ERFNet), whose aim is to tackle the fully-automated, real-time, and 3D delineation process of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using the k-fold validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware to simulate real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even with small image training datasets, with potential benefits for personalized patient management.
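The Tversky loss mentioned above generalizes the Dice loss by weighting false positives and false negatives separately, which helps on class-imbalanced segmentation targets like a small gland in a large volume. A minimal NumPy sketch of the standard formulation follows; the alpha/beta values are illustrative defaults, not the paper's settings:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for binary segmentation.

    pred: predicted foreground probabilities; target: binary ground truth.
    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the Dice loss.
    """
    pred, target = np.ravel(pred).astype(float), np.ravel(target).astype(float)
    tp = (pred * target).sum()          # true-positive mass
    fp = (pred * (1.0 - target)).sum()  # false-positive mass
    fn = ((1.0 - pred) * target).sum()  # false-negative mass
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```

Setting beta > alpha penalizes missed foreground voxels more heavily than spurious ones, pushing the network toward higher sensitivity.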
