Biomedical Imaging: From Methods to Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 22439

Special Issue Editors


Prof. Dr. Joaquin Lopez Herraiz
Guest Editor
Faculty of Physical Sciences, Complutense University of Madrid, CEI Moncloa, 28040 Madrid, Spain
Interests: image reconstruction; PET; CT; US; deep learning

Dr. Gonzalo Vegas Sánchez-Ferrero
Guest Editor
Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
Interests: image reconstruction; CT; MRI; image harmonization; quantitative imaging

Special Issue Information

Dear Colleagues,

Biomedical imaging covers all the processes involved in the acquisition, processing, visualization, and analysis of structural or functional images of living specimens or systems. It includes both biological and medical applications. Examples of biomedical imaging modalities include X-ray, CT, MRI and fMRI, PET, SPECT, MEG, microscopy/fluorescence imaging, and photoacoustic imaging, among others. It is an exciting field that is continuously evolving.

The purpose of this Special Issue is to provide an overview of new developments in biomedical imaging. Potential topics include, but are not limited to, new biomedical imaging modalities, multimodality imaging, new developments in image processing methods such as machine learning and deep learning techniques, and new algorithms and computational methods applied to biomedical imaging.

New methodological approaches, as well as in vitro, in silico, or in vivo studies that challenge current thinking in biomedical imaging research, are warmly welcomed. Review studies, including those that use conceptual frameworks for any of the aforementioned topics, are also welcome.

Prof. Dr. Joaquin Lopez Herraiz
Dr. Gonzalo Vegas Sánchez-Ferrero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biomedical Image Denoising
  • Biomedical Image Reconstruction
  • Biomedical Image Segmentation
  • Biomedical Registration Methods
  • Feature Recognition
  • Biomedical Image Classification
  • Machine Learning
  • Deep Learning (DL)
  • Neural Network (NN)
  • Convolutional Neural Network (CNN)
  • Kinetic Modeling
  • Super-Resolution
  • X-ray Imaging
  • Computed Tomography (CT)
  • Magnetic Resonance Imaging (MRI and fMRI)
  • Positron Emission Tomography (PET)
  • Single-Photon Emission Computed Tomography (SPECT)
  • Magnetoencephalography (MEG)
  • Fluorescence Imaging
  • Photoacoustic Imaging
  • Quantitative Imaging

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (8 papers)


Research

Article
CycleGAN-Driven MR-Based Pseudo-CT Synthesis for Knee Imaging Studies
by Daniel Vallejo-Cendrero, Juan Manuel Molina-Maza, Blanca Rodriguez-Gonzalez, David Viar-Hernandez, Borja Rodriguez-Vila, Javier Soto-Pérez-Olivares, Jaime Moujir-López, Carlos Suevos-Ballesteros, Javier Blázquez-Sánchez, José Acosta-Batlle and Angel Torrado-Carvajal
Appl. Sci. 2024, 14(11), 4655; https://doi.org/10.3390/app14114655 - 28 May 2024
Cited by 1 | Viewed by 946
Abstract
In the field of knee imaging, the incorporation of MR-based pseudo-CT synthesis holds the potential to mitigate the need for separate CT scans, simplifying workflows, enhancing patient comfort, and reducing radiation exposure. In this work, we present a novel DL framework, grounded in the Cycle-Consistent Generative Adversarial Network (CycleGAN) method and tailored specifically for the synthesis of pseudo-CT images in knee imaging, to surmount the limitations of current methods. Upon visually examining the outcomes, it is evident that the synthesized pseudo-CTs show excellent quality and high robustness. Despite the limited dataset employed, the method is able to capture the particularities of the bone contours in the resulting image. The experimental Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Zero-Normalized Cross Correlation (ZNCC), Mutual Information (MI), Relative Change (RC), and absolute Relative Change (|RC|) report values of 30.4638 ± 7.4770, 28.1168 ± 1.5245, 0.9230 ± 0.0217, 0.9807 ± 0.0071, 0.8548 ± 0.1019, 0.0055 ± 0.0265, and 0.0302 ± 0.0218 (median ± median absolute deviation), respectively. The voxel-by-voxel correlation plot shows an excellent correlation between pseudo-CT and ground-truth CT Hounsfield units (m = 0.9785; adjusted R2 = 0.9988; ρ = 0.9849; p < 0.001). The Bland–Altman plot shows that the average of the differences is low (HU_CT − HU_pseudoCT = 0.7199 ± 35.2490; 95% confidence interval [−68.3681, 69.8079]). This study represents the first reported effort in the field of MR-based knee pseudo-CT synthesis, paving the way to significantly advance the field of knee imaging.
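
The metrics reported above (MAE, PSNR, SSIM, ZNCC) are standard image-similarity measures. The sketch below illustrates how they are typically computed for a pair of co-registered CT and pseudo-CT volumes in Hounsfield units; it is an illustrative computation under those assumptions, not the authors' code, and the function and array names are placeholders.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def pseudo_ct_metrics(ct, pseudo_ct):
    """Illustrative MAE/PSNR/SSIM/ZNCC for two co-registered HU volumes."""
    ct = ct.astype(np.float64)
    pseudo_ct = pseudo_ct.astype(np.float64)
    data_range = ct.max() - ct.min()

    mae = np.mean(np.abs(ct - pseudo_ct))
    psnr = peak_signal_noise_ratio(ct, pseudo_ct, data_range=data_range)
    ssim = structural_similarity(ct, pseudo_ct, data_range=data_range)

    # Zero-normalized cross-correlation: mean of the product of the
    # mean-subtracted, unit-variance voxel intensities.
    a = (ct - ct.mean()) / ct.std()
    b = (pseudo_ct - pseudo_ct.mean()) / pseudo_ct.std()
    zncc = np.mean(a * b)

    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim, "ZNCC": zncc}
```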

Communication
Development and Evaluation of Deep Learning-Based Reconstruction Using Preclinical 7T Magnetic Resonance Imaging
by Naoki Tsuji, Takuma Kobayashi, Junpei Ueda and Shigeyoshi Saito
Appl. Sci. 2023, 13(11), 6567; https://doi.org/10.3390/app13116567 - 29 May 2023
Viewed by 1629
Abstract
This study investigated a method for improving the quality of images with a low number of excitations (NEXs), based on deep learning using T2-weighted magnetic resonance imaging (MRI) of the heads of normal Wistar rats, to achieve higher image quality and a shorter acquisition time. A 7T MRI system was used to acquire T2-weighted images of the whole brain with NEXs = 2, 4, 8, and 12. As a preprocessing step, non-rigid registration of the acquired low-NEX images (NEXs = 2, 4, 8) and the NEXs = 12 images was performed. A residual dense network (RDN) was used for training; a low-NEX image was used as the input to the RDN, and the NEX12 image was used as the reference image. For quantitative evaluation, we measured the signal-to-noise ratio (SNR), peak SNR (PSNR), and structural similarity index measure (SSIM) of the original image and the image obtained by the RDN. The NEX2 results are presented as an example. The SNR of the cortex was 10.4 for NEX2, whereas the SNR of the image reconstructed with the RDN for NEX2 was 32.1 (the SNR for NEX12 was 19.6). In addition, the PSNR for NEX2 increased from 35.4 ± 2.0 for the input image to 37.6 ± 2.9 for the RDN-reconstructed image (p = 0.05), and the SSIM for NEX2 increased from 0.78 ± 0.05 for the input image to 0.91 ± 0.05 for the RDN-reconstructed image (p = 0.0003). Furthermore, NEX2 reduced the acquisition time by 83%. Therefore, in preclinical 7T MRI, supervised learning between NEXs using RDNs can potentially improve the image quality of low-NEX images and shorten the acquisition time.
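
The SNR figures quoted above are region-of-interest measurements. As a minimal sketch, assuming SNR is computed in the common way as the mean signal in a tissue ROI divided by the standard deviation of a background ROI, the helper below shows the calculation; the function and mask names are hypothetical, not taken from the paper.

```python
import numpy as np

def roi_snr(image, signal_mask, background_mask):
    """Common ROI-based SNR: mean signal / standard deviation of background noise."""
    signal = image[signal_mask].mean()
    noise = image[background_mask].std()
    return signal / noise

# Hypothetical usage on a T2-weighted rat-brain slice (2D array `img`), with
# boolean masks drawn over the cortex and over air outside the head:
# snr_cortex = roi_snr(img, cortex_mask, air_mask)
```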

Article
Radiological Features of B3 Lesions in Mutation Carrier Patients: A Single-Center Retrospective Analysis
by Claudia Lucia Piccolo, Carlo Augusto Mallio, Laura Messina, Manuela Tommasiello, Paolo Orsaria, Vittorio Altomare, Matteo Sammarra and Bruno Beomonte Zobel
Appl. Sci. 2023, 13(8), 4994; https://doi.org/10.3390/app13084994 - 16 Apr 2023
Cited by 3 | Viewed by 1422
Abstract
Background. To evaluate the radiological features of B3 lesions in patients with genetic mutations in order to establish an anatomo-radiological correlation. Methods. A total of 227 women with a histological diagnosis of a B3 breast lesion were enrolled. Breast images of the 21 patients with positive genetic tests for mutations in genes associated with breast cancer were analyzed. Results. BRCA1 was the most frequent mutation (n = 12), followed by ATM (n = 6) and BRCA2 (n = 3). The histological findings comprised nine atypical ductal hyperplasia (ADH), six lobular neoplasia (LN) including lobular carcinoma in situ (LCIS), three flat epithelial atypia (FEA), and three radial scar (RS) lesions. The results showed a significant difference in B3 lesion distribution between the three subgroups of mutations. LN and FEA showed the highest malignancy correlation. Patient age and risk anamnesis were factors that significantly influenced the malignancy rate. On mammography, 90.5% of lesions appeared as microcalcifications. On ultrasound, 13 lesions were observed as hypoechoic lesions. On breast MRI, 16 lesions were detected as mass enhancement across all groups. DWI and kinetic curves significantly correlated with the risk of cancer. Conclusions. The radiological features of B3 lesions may help in the diagnosis of breast cancer malignancy. The high malignancy rate in our sample suggests that these lesions should always be surgically excised.
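
The reported difference in B3 lesion distribution across the three mutation subgroups is the kind of comparison typically assessed with a chi-square test on a contingency table. The sketch below shows such a test with SciPy; the counts are placeholders for illustration only and are not the study data (with counts this small, Fisher's exact test would often be preferred).

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = mutation subgroup (BRCA1, ATM, BRCA2),
# columns = B3 lesion type (ADH, LN, FEA, RS). Placeholder counts only.
table = np.array([
    [5, 4, 2, 1],
    [2, 1, 1, 2],
    [2, 1, 0, 0],
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```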

Article
Automatic Detection of Diabetic Hypertensive Retinopathy in Fundus Images Using Transfer Learning
by Dimple Nagpal, Najah Alsubaie, Ben Othman Soufiene, Mohammed S. Alqahtani, Mohamed Abbas and Hussain M. Almohiy
Appl. Sci. 2023, 13(8), 4695; https://doi.org/10.3390/app13084695 - 7 Apr 2023
Cited by 10 | Viewed by 3141
Abstract
Diabetic retinopathy (DR) is a complication of diabetes that affects the eyes. It occurs when high blood sugar levels damage the blood vessels in the retina, the light-sensitive tissue at the back of the eye. Therefore, there is a need to detect DR in its early stages to reduce the risk of blindness. Transfer learning is a machine learning technique in which a pre-trained model is used as a starting point for a new task, and it has been applied to diabetic retinopathy classification with promising results. Pre-trained models, such as convolutional neural networks (CNNs), can be fine-tuned on a new dataset of retinal images to classify diabetic retinopathy. This manuscript aims to develop an automated scheme for diagnosing and grading DR and hypertensive retinopathy (HR). Retinal image classification was performed in three phases comprising preprocessing, segmentation, and feature extraction. A preprocessing methodology is proposed for reducing the noise in retinal images; A-CLAHE, DnCNN, and Wiener filter techniques were applied for image enhancement. After preprocessing, blood vessel segmentation in retinal images was performed using Otsu thresholding and mathematical morphology. Feature extraction and classification were performed using transfer learning models, and the segmented images were then classified using a Modified ResNet 101 architecture. The performance on enhanced images was evaluated using PSNR and shows better results than the existing literature. The network was trained on more than 6000 images from the MESSIDOR and ODIR datasets and achieves a classification accuracy of 98.72%.
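
As a hedged illustration of the transfer-learning step described above — fine-tuning a pretrained CNN on fundus images — the sketch below adapts a torchvision ResNet-101 with a new classification head. The number of classes, the frozen backbone, and the optimizer settings are assumptions; the paper's "Modified ResNet 101" architecture may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # assumed number of DR/HR grades; adjust to the dataset

# Load an ImageNet-pretrained ResNet-101 and replace the classification head.
model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optionally freeze the backbone and train only the new head at first.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4
)
criterion = nn.CrossEntropyLoss()

# Training loop (sketch): `loader` yields preprocessed fundus images and labels.
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```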

Article
Dual-Tracer PET Image Separation by Deep Learning: A Simulation Study
by Bolin Pan, Paul K. Marsden and Andrew J. Reader
Appl. Sci. 2023, 13(7), 4089; https://doi.org/10.3390/app13074089 - 23 Mar 2023
Cited by 9 | Viewed by 3054
Abstract
Multiplexed positron emission tomography (PET) imaging provides perfectly registered simultaneous functional and molecular imaging of more than one biomarker. However, the separation of the multiplexed PET signals within a single PET scan is challenging because all PET tracers emit positrons, which, after annihilating with a nearby electron, give rise to 511 keV photon pairs that are detected in coincidence. Compartment modelling can separate single-tracer PET signals from multiplexed signals based on differences in bio-distribution kinetics and radioactive decay. However, the compartment-modelling-based method requires staggered injections and assumes that each tracer's input function is known. In this paper, we propose a deep-learning-based method to simultaneously separate dual-tracer PET signals without explicitly knowing the input functions. We evaluate the proposed deep-learning-based separation method on dual-tracer [18F]FDG and [11C]MET PET simulations and compare its separation performance to that of the compartment-modelling-based method, assessing performance dependence on the time interval between tracer injections as well as on the amount of training data. It is shown that the proposed method implicitly denoises the separated images and offers reduced variance in the separated images compared to compartment modelling.
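
To make the separation problem concrete, the sketch below simulates the radioactive-decay component of a staggered dual-tracer acquisition (18F half-life ≈ 109.8 min, 11C ≈ 20.4 min): the scanner only ever observes the sum of the two decay-weighted signals. The tissue-uptake shapes and injection timing are placeholders, not the paper's kinetic simulation.

```python
import numpy as np

HALF_LIFE_F18 = 109.77  # minutes
HALF_LIFE_C11 = 20.36   # minutes

def decay(t, injection_time, half_life):
    """Radioactive decay factor after injection; zero before the tracer is injected."""
    lam = np.log(2) / half_life
    return np.where(t >= injection_time, np.exp(-lam * (t - injection_time)), 0.0)

t = np.linspace(0, 90, 181)  # 90 min scan sampled every 0.5 min

# Placeholder tissue time-activity shapes for each tracer, modulated by
# radioactive decay; here [11C]MET is injected 10 min after [18F]FDG.
fdg = (1 - np.exp(-0.3 * t)) * decay(t, injection_time=0.0, half_life=HALF_LIFE_F18)
met = (1 - np.exp(-0.5 * np.clip(t - 10, 0, None))) * decay(t, 10.0, HALF_LIFE_C11)

measured = fdg + met  # only this summed signal is observed during the scan
```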

Article
Inter-Rater Variability in the Evaluation of Lung Ultrasound in Videos Acquired from COVID-19 Patients
by Joaquin L. Herraiz, Clara Freijo, Jorge Camacho, Mario Muñoz, Ricardo González, Rafael Alonso-Roca, Jorge Álvarez-Troncoso, Luis Matías Beltrán-Romero, Máximo Bernabeu-Wittel, Rafael Blancas, Antonio Calvo-Cebrián, Ricardo Campo-Linares, Jaldún Chehayeb-Morán, Jose Chorda-Ribelles, Samuel García-Rubio, Gonzalo García-de-Casasola, Adriana Gil-Rodrigo, César Henríquez-Camacho, Alba Hernandez-Píriz, Carlos Hernandez-Quiles, Rafael Llamas-Fuentes, Davide Luordo, Raquel Marín-Baselga, María Cristina Martínez-Díaz, María Mateos-González, Manuel Mendez-Bailon, Francisco Miralles-Aguiar, Ramón Nogue, Marta Nogué, Borja Ortiz de Urbina-Antia, Alberto Ángel Oviedo-García, José M. Porcel, Santiago Rodriguez, Diego Aníbal Rodríguez-Serrano, Talía Sainz, Ignacio Manuel Sánchez-Barrancos, Marta Torres-Arrese, Juan Torres-Macho, Angela Trueba Vicente, Tomas Villén-Villegas, Juan José Zafra-Sánchez and Yale Tung-Chen
Appl. Sci. 2023, 13(3), 1321; https://doi.org/10.3390/app13031321 - 18 Jan 2023
Cited by 10 | Viewed by 3947
Abstract
Lung ultrasound (LUS) allows for the detection of a series of manifestations of COVID-19, such as B-lines and consolidations. The objective of this work was to study the inter-rater reliability (IRR) when detecting signs associated with COVID-19 in LUS, as well as the performance of the test in longitudinal versus transverse orientations. Thirty-three physicians with advanced experience in LUS independently evaluated ultrasound videos previously acquired using the ULTRACOV system on 20 patients with confirmed COVID-19. For each patient, 24 videos of 3 s were acquired (12 positions, with the probe in longitudinal and transverse orientations). The physicians had no information about the patients or other previous evaluations. The score assigned to each acquisition followed the convention applied in previous studies. A substantial IRR was found for normal LUS (κ = 0.74), with only a fair IRR for the presence of individual B-lines (κ = 0.36) and for confluent B-lines occupying < 50% (κ = 0.26), and a moderate IRR for consolidations and B-lines occupying > 50% (κ = 0.50). No statistically significant differences between the longitudinal and transverse scans were found. The IRR for LUS of COVID-19 patients may benefit from more standardized clinical protocols.
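
With many raters, inter-rater reliability is commonly summarized with Fleiss' kappa; the abstract does not state which kappa statistic was used, so the sketch below is only an illustrative computation with statsmodels on a placeholder ratings array (videos × physicians), not the study data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Placeholder ratings: rows = ultrasound videos, columns = physicians,
# values = LUS score category (e.g., 0-3). Not the study data.
rng = np.random.default_rng(0)
scores = rng.integers(0, 4, size=(24, 33))

table, categories = aggregate_raters(scores)  # per-video counts of each category
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.2f}")
```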

Article
A Deep Learning Approach to Upscaling “Low-Quality” MR Images: An In Silico Comparison Study Based on the UNet Framework
by Rishabh Sharma, Panagiotis Tsiamyrtzis, Andrew G. Webb, Ioannis Seimenis, Constantinos Loukas, Ernst Leiss and Nikolaos V. Tsekos
Appl. Sci. 2022, 12(22), 11758; https://doi.org/10.3390/app122211758 - 19 Nov 2022
Cited by 6 | Viewed by 3268
Abstract
MR scans of low-gamma X-nuclei, low-concentration metabolites, or standard imaging at very low field entail a challenging tradeoff between resolution, signal-to-noise ratio, and acquisition duration. Deep learning (DL) techniques, such as UNets, can potentially be used to improve such "low-quality" (LQ) images. We investigate three UNets for upscaling LQ MRI: dense (DUNet), robust (RUNet), and anisotropic (AUNet). These were evaluated for two acquisition scenarios. In the same-subject High-Quality Complementary Priors (HQCP) scenario, an LQ and a high-quality (HQ) image are collected, and both the LQ and HQ images are inputs to the UNets. In the No Complementary Priors (NoCP) scenario, only the LQ images are collected and used as the sole input to the UNets. To address the lack of same-subject LQ and HQ images, we added data from the OASIS-1 database. The UNets were tested on upscaling 1/8, 1/4, and 1/2 undersampled images for both scenarios. As manifested by non-statistically-significant differences in the evaluation metrics, also supported by subjective observation, the three UNets upscaled images equally well. This was in contrast to mixed-effects statistics that clearly illustrated significant differences. These observations suggest that the detailed architecture of these UNets may not play a critical role. As expected, HQCP substantially improves upscaling with any of the UNets. The outcomes support the notion that DL methods may have merit as an integral part of integrated holistic approaches to advancing special MRI acquisitions; however, primary attention should be paid to the foundational step of such approaches, i.e., the actual data collected.
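
One common way to emulate the 1/8, 1/4, and 1/2 undersampled "low-quality" inputs described above is to retain only a central fraction of k-space and transform back to image space. The sketch below shows such a simulation in NumPy; it is an assumption about how LQ images can be generated, not necessarily the authors' exact pipeline.

```python
import numpy as np

def central_kspace_downsample(image, keep_fraction):
    """Keep only the central `keep_fraction` of k-space along each axis of a 2D slice."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(kspace, dtype=bool)
    ny, nx = kspace.shape
    ky, kx = int(ny * keep_fraction), int(nx * keep_fraction)
    y0, x0 = (ny - ky) // 2, (nx - kx) // 2
    mask[y0:y0 + ky, x0:x0 + kx] = True  # low spatial frequencies only
    lq = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(lq)

# e.g. lq_quarter = central_kspace_downsample(hq_slice, keep_fraction=0.25)
```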

Review

Review
Magnetic Resonance Roadmap in Detecting and Staging Endometriosis: Usual and Unusual Localizations
by Claudia Lucia Piccolo, Laura Cea, Martina Sbarra, Anna Maria De Nicola, Carlo De Cicco Nardone, Eliodoro Faiella, Rosario Francesco Grasso and Bruno Beomonte Zobel
Appl. Sci. 2023, 13(18), 10509; https://doi.org/10.3390/app131810509 - 21 Sep 2023
Viewed by 3227
Abstract
Endometriosis is a chronic condition characterized by the presence of abnormal endometrial tissue outside the uterus. These misplaced cells are responsible for inflammation, symptoms, scar tissue, and adhesions. Endometriosis manifests mainly in three patterns: superficial peritoneal lesions (SUP), ovarian endometriomas (OMA), and deep infiltrating endometriosis (DIE). It can also exhibit atypical and extremely rare localizations. The updated 2022 ESHRE guidelines recommend using both ultrasound and magnetic resonance imaging (MRI) as first-line diagnostic tests. Currently, MRI provides a more complete view of the pelvic anatomy. The aim of our review is to provide radiologists with a "map" that can help them in reporting pelvic MRI scans in patients with endometriosis. We will illustrate the usual and unusual localizations of endometriosis (categorized into compartments) using post-operative imaging, and we will focus on the role of MRI, the main sequences, and the use of contrast agents.
