New Frontiers in Medical Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: closed (10 October 2022) | Viewed by 22000

Special Issue Editors


Guest Editor
Department of Electrical Engineering, Chang Gung University, Tao-Yuan 33302, Taiwan
Interests: medical imaging processing; pattern recognition; computer visualization; VLSI design

Guest Editor
School of Informatics, Kainan University, Tao-Yuan 33857, Taiwan
Interests: digital image processing; artificial intelligence; machine vision; digital signal processing

Special Issue Information

Dear Colleagues,

The aim of this Special Issue is to highlight the latest developments in the discipline of medical image processing, including new medical imaging technologies, new methods for mitigating noise degradation and enhancing raw data, and other novel algorithms. Our goal is to explore the complexities of medical image data in order to extract precise medical information that can help determine whether invasive surgery is necessary and how it should be planned.

We hope to establish a collection of papers that will be of interest to scholars in the field.

Contributions in the form of full papers, reviews, and communications on related topics are very welcome.

Prof. Dr. Jiann-Der Lee
Prof. Dr. Jong-Chih Chien
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

14 pages, 14459 KiB  
Article
The Usefulness of Gradient-Weighted CAM in Assisting Medical Diagnoses
by Jong-Chih Chien, Jiann-Der Lee, Ching-Shu Hu and Chieh-Tsai Wu
Appl. Sci. 2022, 12(15), 7748; https://doi.org/10.3390/app12157748 - 1 Aug 2022
Cited by 6 | Viewed by 2024
Abstract
In modern medicine, medical imaging technologies such as computed tomography (CT), X-ray, ultrasound, magnetic resonance imaging (MRI), and nuclear medicine have been proven to provide useful diagnostic information by displaying areas of a lesion or tumor not visible to the human eye, and may also reveal additional hidden information when modern data analysis methods are applied. These methods, including Artificial Intelligence (AI) technologies, are based on deep learning architectures and have shown remarkable results in recent studies. However, the lack of explanatory ability of connection-based, as opposed to algorithm-based, deep learning technologies is one of the main reasons for the delay in their acceptance in mainstream medicine. One recent method that may offer explanatory ability for CNN-based deep learning networks is gradient-weighted class activation mapping (Grad-CAM), which produces heat-maps that may explain classification results. Many studies in the literature already compare objective metrics of Grad-CAM-generated heat-maps against those of other methods. However, subjective evaluation of AI-based classification/prediction results on medical images by qualified personnel could potentially contribute more to the acceptance of AI than objective metrics. The purpose of this paper is to investigate whether and how Grad-CAM heat-maps can help physicians and radiologists make diagnoses, by presenting the results of AI-based classifications, together with their associated Grad-CAM-generated heat-maps, to a qualified radiologist. The results of this study show that the radiologist considers Grad-CAM-generated heat-maps to be generally helpful toward diagnosis.
(This article belongs to the Special Issue New Frontiers in Medical Image Processing)
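The heat-map computation at the heart of Grad-CAM is simple to state: pool the gradients of the class score per channel into weights, form the weighted sum of the feature maps, and keep only the positive part. A minimal NumPy sketch of that weighting step (the array shapes and toy values are illustrative assumptions, not the paper's models or data):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM: weight each feature map by its average gradient, then ReLU.

    activations: (K, H, W) feature maps from the last conv layer
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps
    """
    # alpha_k: global-average-pool the gradients per channel
    alphas = gradients.mean(axis=(1, 2))                  # shape (K,)
    # weighted sum over channels, then ReLU to keep positive evidence only
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    # normalize to [0, 1] for display as a heat-map
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# toy example: 2 channels on a 2x2 map
acts = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 2.0], [0.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],               # alpha_0 = 1.0
                  [[0.5, 0.5], [0.5, 0.5]]])              # alpha_1 = 0.5
heat = grad_cam_heatmap(acts, grads)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the medical image, which is the form the radiologist in the study would inspect.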

13 pages, 9419 KiB  
Article
Mask Branch Network: Weakly Supervised Branch Network with a Template Mask for Classifying Masses in 3D Automated Breast Ultrasound
by Daekyung Kim, Haesol Park, Mijung Jang and Kyong-Joon Lee
Appl. Sci. 2022, 12(13), 6332; https://doi.org/10.3390/app12136332 - 22 Jun 2022
Cited by 1 | Viewed by 1234
Abstract
Automated breast ultrasound (ABUS) is being rapidly adopted for screening and diagnosing breast cancer. Breast masses, including cancers shown in ABUS scans, often appear as irregular hypoechoic areas that are hard to distinguish from background shadings. We propose a novel branch network architecture that incorporates segmentation information of masses into the training process. The branch network is integrated into the neural network, providing a spatial attention effect, and boosts the performance of existing classifiers by helping them learn meaningful features around the target breast mass. For the segmentation information, we leverage existing radiology reports without additional labeling effort. The reports, which are generated during the medical image reading process, include the characteristics of breast masses, such as shape and orientation, from which a template mask can be created in a rule-based manner. Experimental results show that the proposed branch network with a template mask significantly improves the performance of existing classifiers. We also provide a qualitative interpretation of the proposed method by visualizing the attention effect on target objects.
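The spatial-attention idea described here can be pictured as gating feature maps with a template mask so that responses inside the suspected mass region are preserved while background responses are attenuated. A hypothetical NumPy sketch of such gating (the function name, shapes, and soft-gate formulation are my assumptions, not the paper's architecture):

```python
import numpy as np

def apply_template_mask(features, mask, strength=0.5):
    """Attenuate features outside a template mask (a crude spatial attention).

    features: (C, H, W) feature maps from a backbone classifier
    mask:     (H, W) binary template mask (1 inside the suspected mass)
    strength: how strongly to suppress features outside the mask (0 = no effect)
    """
    # soft gate: 1 inside the mask, (1 - strength) outside
    gate = mask + (1.0 - mask) * (1.0 - strength)
    return features * gate[None, :, :]

feats = np.ones((3, 2, 2))                    # toy feature maps
mask = np.array([[1.0, 0.0], [0.0, 0.0]])     # rule-based template mask
out = apply_template_mask(feats, mask, strength=0.5)
```

In the paper the mask instead supervises a dedicated branch during training; the sketch only illustrates why masking concentrates the classifier's attention on the mass.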

16 pages, 886 KiB  
Article
Skin Cancer Disease Detection Using Transfer Learning Technique
by Javed Rashid, Maryam Ishfaq, Ghulam Ali, Muhammad R. Saeed, Mubasher Hussain, Tamim Alkhalifah, Fahad Alturise and Noor Samand
Appl. Sci. 2022, 12(11), 5714; https://doi.org/10.3390/app12115714 - 3 Jun 2022
Cited by 38 | Viewed by 8087
Abstract
Melanoma is a fatal type of skin cancer; its rapid spread results in a high fatality rate when the malignancy is not treated at an early stage. Patients' lives can be saved by accurately detecting skin cancer early, and a quick and precise diagnosis might help increase the survival rate. This motivates the development of a computer-assisted diagnostic support system. This research proposes a novel deep transfer learning model for melanoma classification based on MobileNetV2, a deep convolutional neural network that classifies sample skin lesions as malignant or benign. The performance of the proposed model is evaluated on the ISIC 2020 dataset. The dataset contains less than 2% malignant samples, resulting in severe class imbalance; various data augmentation techniques were applied to tackle this issue and add diversity to the dataset. The experimental results demonstrate that the proposed technique outperforms state-of-the-art deep learning techniques in terms of accuracy and computational cost.
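A common way to combine augmentation with rebalancing, as the abstract describes, is to pad the minority (malignant) class with augmented copies until the classes are the same size. A toy NumPy sketch under stated assumptions (binary labels with 0 as the majority class; a horizontal flip standing in for the paper's unspecified augmentation set):

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_with_flips(images, labels, minority_label=1):
    """Rebalance a binary dataset by padding the minority class with flipped copies.

    images: (N, H, W) array; labels: (N,) array of 0/1.
    This toy assumes label 0 is the majority class.
    """
    minority = images[labels == minority_label]
    majority = images[labels != minority_label]
    extra = []
    while len(minority) + len(extra) < len(majority):
        img = minority[rng.integers(len(minority))]
        # horizontal flip; real pipelines also use rotations, crops, jitter, etc.
        extra.append(img[:, ::-1])
    aug_minority = np.concatenate([minority, np.stack(extra)]) if extra else minority
    aug_images = np.concatenate([majority, aug_minority])
    aug_labels = np.concatenate([np.full(len(majority), 1 - minority_label),
                                 np.full(len(aug_minority), minority_label)])
    return aug_images, aug_labels

# toy dataset: 4 "benign" images, 1 "malignant"
images = np.arange(5 * 2 * 2, dtype=float).reshape(5, 2, 2)
labels = np.array([0, 0, 0, 0, 1])
x, y = oversample_with_flips(images, labels)
```

With a 2% malignant fraction, as in ISIC 2020, rebalancing of this kind (or a class-weighted loss) is what keeps the classifier from trivially predicting "benign".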

16 pages, 2913 KiB  
Article
Reconstruction of Preclinical PET Images via Chebyshev Polynomial Approximation of the Sinogram
by Nicholas E. Protonotarios, Athanassios S. Fokas, Alexandros Vrachliotis, Vangelis Marinakis, Nikolaos Dikaios and George A. Kastis
Appl. Sci. 2022, 12(7), 3335; https://doi.org/10.3390/app12073335 - 25 Mar 2022
Cited by 3 | Viewed by 2248
Abstract
Over the last decades, there has been increasing interest in dedicated preclinical imaging modalities for biomedical research. In the case of positron emission tomography (PET) in particular, reconstructed images provide useful information about the morphology and function of internal organs. PET data, stored as sinograms, involve the Radon transform of the image under investigation, and the analytical approach to PET image reconstruction incorporates the derivative of the Hilbert transform of the sinogram. In this direction, we present a novel numerical algorithm for the inversion of the Radon transform based on Chebyshev polynomials of the first kind. By employing these polynomials, the computation of the derivative of the Hilbert transform of the sinogram is significantly simplified. Extending the mathematical setting of previous research based on Chebyshev polynomials, we are able to efficiently apply our new Chebyshev inversion scheme to analytic preclinical PET image reconstruction. We evaluated our reconstruction algorithm on projection data from a simulated small-animal image quality (IQ) phantom study, in accordance with the NEMA NU 4-2008 standards protocol. In particular, we quantified our reconstructions via the image quality metrics of percentage standard deviation, recovery coefficient, and spill-over ratio. The projection data were acquired at three Poisson noise levels: 100% (NL1), 50% (NL2), and 20% (NL3) of the total counts, respectively. In the uniform region of the IQ phantom, Chebyshev reconstructions were consistently improved over filtered backprojection (FBP) in terms of percentage standard deviation (up to 29% lower, depending on the noise level). For all rods, we measured the contrast-to-noise ratio, indicating an improvement of up to 68% depending on the noise level. To compare our reconstruction method with FBP at equal noise levels, plots of recovery coefficient and spill-over ratio as functions of the percentage standard deviation were generated after smoothing the NL3 reconstructions with three different Gaussian filters. When post-smoothing was applied, Chebyshev reconstructions demonstrated recovery coefficient values up to 14% and 42% higher for 1–3 mm and 4–5 mm rods, respectively, compared to FBP, depending on the smoothing sigma values. Our results indicate that our Chebyshev-based analytic reconstruction method may provide PET reconstructions that are comparable to FBP, yielding a good alternative to standard analytic preclinical PET reconstruction methods.
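The key ingredient named in this abstract is the expansion of the sinogram in Chebyshev polynomials of the first kind, which makes the Hilbert-transform derivative tractable analytically. As a small illustration of why such expansions are attractive, here is a sketch of approximating a smooth 1-D "projection profile" with NumPy's Chebyshev tools; this is not the authors' inversion scheme, and the Gaussian test profile is an arbitrary stand-in:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# sample a smooth 1-D profile on [-1, 1] at Chebyshev-friendly nodes
x = np.cos(np.linspace(0.0, np.pi, 200))
f = np.exp(-4.0 * x**2)                 # stand-in for one sinogram row

# least-squares fit with Chebyshev polynomials of the first kind, degree 20
coeffs = C.chebfit(x, f, deg=20)
approx = C.chebval(x, coeffs)
max_err = float(np.max(np.abs(approx - f)))
```

Smooth functions have rapidly decaying Chebyshev coefficients, so a modest degree reproduces the profile to near machine precision; the paper exploits the same property to simplify the analytic reconstruction formula.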

14 pages, 5594 KiB  
Article
CT-Video Matching for Retrograde Intrarenal Surgery Based on Depth Prediction and Style Transfer
by Honglin Lei, Yanqi Pan, Tao Yu, Zuoming Fu, Chongan Zhang, Xinsen Zhang, Peng Wang, Jiquan Liu, Xuesong Ye and Huilong Duan
Appl. Sci. 2021, 11(20), 9585; https://doi.org/10.3390/app11209585 - 14 Oct 2021
Cited by 1 | Viewed by 1695
Abstract
Retrograde intrarenal surgery (RIRS) is a minimally invasive endoscopic procedure for the treatment of kidney stones. Traditionally, RIRS is performed by reconstructing a 3D model of the kidney from preoperative CT images in order to locate the kidney stones; the surgeon then finds and removes the stones in the endoscopic video, guided by experience. However, due to the many branches within the kidney, it can be difficult to relocate each lesion and to ensure that all branches are searched, which may result in some kidney stones being missed. To avoid this situation, we propose a convolutional neural network (CNN)-based method for matching preoperative CT images and intraoperative videos for the navigation of ureteroscopic procedures. First, pairs of synthetic images and depth maps reflecting preoperative information are obtained from a 3D model of the kidney. Then, a style transfer network is introduced to transfer the ureteroscopic images into the style of the synthetic images, from which the associated depth maps can be generated. Finally, the fusion and matching of depth maps from preoperative images and intraoperative video frames are realized based on semantic features. Compared with a traditional CT-video matching method, our method achieved a five-fold improvement in time performance and a 26% improvement in top-10 accuracy.
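The final matching stage boils down to ranking a gallery of CT-rendered depth-map descriptors by similarity to the descriptor of the current video frame, then reporting the top candidates (the "top 10 accuracy" above). A minimal cosine-similarity sketch of that ranking step, with illustrative 2-D descriptors in place of the paper's semantic features:

```python
import numpy as np

def top_k_matches(query, gallery, k=3):
    """Rank gallery descriptors by cosine similarity to a query descriptor.

    query:   (D,) feature vector from an intraoperative frame's depth map
    gallery: (M, D) feature vectors from CT-rendered synthetic depth maps
    Returns indices of the k best-matching gallery entries, best first.
    """
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity per gallery entry
    return np.argsort(-sims)[:k]

gallery = np.array([[1.0, 0.0],        # toy descriptors for 3 kidney views
                    [0.0, 1.0],
                    [0.7, 0.7]])
query = np.array([1.0, 0.1])
best = top_k_matches(query, gallery, k=2)
```

The five-fold speedup reported in the abstract comes from matching in this compact descriptor space instead of comparing images directly.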

17 pages, 32557 KiB  
Article
Automatic Surgical Instrument Recognition—A Case of Comparison Study between the Faster R-CNN, Mask R-CNN, and Single-Shot Multi-Box Detectors
by Jiann-Der Lee, Jong-Chih Chien, Yu-Tsung Hsu and Chieh-Tsai Wu
Appl. Sci. 2021, 11(17), 8097; https://doi.org/10.3390/app11178097 - 31 Aug 2021
Cited by 13 | Viewed by 5431
Abstract
In various studies, problems with surgical instruments in the operating room are found to be one of the major causes of delays and errors. It would therefore be of great help in surgery to quickly and automatically identify and count the surgical instruments in the operating room using only video information. In this study, the recognition rate of fourteen surgical instruments is studied using the Faster R-CNN, Mask R-CNN, and Single-Shot Multi-Box Detectors, three deep learning networks that have exhibited near real-time object detection and identification performance in recent studies. In our experiments, which used screen captures of real surgery video clips for training and testing, we found that an acceptable accuracy/speed tradeoff can be achieved by the Mask R-CNN classifier, which exhibited an overall average precision of 98.94% across all the instruments.
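Detector comparisons like this one score a predicted bounding box as a true positive when its intersection-over-union (IoU) with a ground-truth box clears a threshold, and average precision is then computed from the resulting true/false positives. A self-contained sketch of the IoU computation that underlies such evaluations (the 0.5 threshold is a common convention, not a detail stated in the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# a detection typically counts as a true positive when IoU >= 0.5
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

Here the two 10x10 boxes overlap in a 5x5 region, giving an IoU of 25/175, below the 0.5 threshold, so this detection would be counted as a false positive.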
