Image and Video Processing in Medicine

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 October 2016) | Viewed by 62225

Special Issue Editors


Guest Editor
Department of Software Engineering and Artificial Intelligence, Faculty of Informatics, Complutense University of Madrid, 28040 Madrid, Spain
Interests: computer vision; image processing; pattern recognition; 3D image reconstruction; spatio-temporal image change detection and tracking; fusion and registration from imaging sensors; super-resolution from low-resolution image sensors

Guest Editor
School of Computing, Faculty of Computing and Engineering, Ulster University, Northern Ireland, UK
Interests: image processing; computer vision; medical and biomedical image analysis; 3D/4D image analytics; remote sensing

Guest Editor
Artificial Intelligence in Biomedical Imaging Lab (AIBI Lab), Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Tokyo 152-8550, Japan
Interests: deep learning; machine learning; computer-aided diagnosis; medical imaging

Special Issue Information

Dear Colleagues,

Medical image and video analysis aims to extract information about structures in the human body. Its main objectives are the prevention, diagnosis, and monitoring of diseases caused by different pathologies.

To achieve these objectives, image and video acquisition, processing, and interpretation are directed toward the efficient analysis of anatomy and physiology.

The following list covers the main topics of this Special Issue; submissions are, however, not limited to these topics:

  • Image and video acquisition instruments and technologies: visible cameras, radiography, tomography, magnetic resonance, nuclear-based devices, ultrasound (echocardiography), acoustic, thermography, neuroimaging.
  • Image processing techniques: enhancement, segmentation, texture analysis, image fusion.
  • Computing vision-based approaches: pattern recognition, 2D/3D structures.

Prof. Dr. Gonzalo Pajares Martinsanz
Prof. Dr. Philip Morrow
Dr. Kenji Suzuki
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

Jump to: Review

Article
3D Clumped Cell Segmentation Using Curvature Based Seeded Watershed
by Thomas Atta-Fosu, Weihong Guo, Dana Jeter, Claudia M. Mizutani, Nathan Stopczynski and Rui Sousa-Neves
J. Imaging 2016, 2(4), 31; https://doi.org/10.3390/jimaging2040031 - 05 Nov 2016
Cited by 22 | Viewed by 8814
Abstract
Image segmentation is an important process that separates objects from the background and also from each other. Applied to cells, the results can be used for cell counting, which is very important in medical diagnosis and treatment and in biological research. Segmenting 3D confocal microscopy images containing cells of different shapes and sizes is still challenging, as the nuclei are closely packed. The watershed transform provides an efficient tool for segmenting such nuclei, provided a reasonable set of markers can be found in the image. In the presence of low-contrast variation or excessive noise in the given image, the watershed transform leads to over-segmentation (a single object is overly split into multiple objects). The traditional watershed uses the local minima of the input image and will characteristically find multiple minima in one object unless they are specified (marker-controlled watershed). An alternative to using the local minima is a supervised technique called seeded watershed, which supplies single seeds to replace the minima for the objects. Consequently, the accuracy of a seeded watershed algorithm relies on the accuracy of the predefined seeds. In this paper, we present a segmentation approach based on the geometric morphological properties of the ‘landscape’ using curvatures. The curvatures are computed as the eigenvalues of the shape matrix, producing accurate seeds that also inherit the original shape of their respective cells. We compare with some popular approaches and show the advantage of the proposed method.
(This article belongs to the Special Issue Image and Video Processing in Medicine)
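The seeded watershed at the heart of this approach can be illustrated with a minimal, dependency-light sketch: a priority-queue flood from predefined seed labels over a height map. The curvature-based seed computation from the paper is not reproduced here; the seeds are simply given.

```python
import heapq

import numpy as np

def seeded_watershed(landscape, seeds):
    """Flood a 2D height map from labeled seeds (0 = unlabeled, >0 = label),
    always expanding the lowest frontier pixel first (4-connectivity)."""
    labels = seeds.copy()
    heap = [(landscape[r, c], r, c) for r, c in zip(*np.nonzero(seeds))]
    heapq.heapify(heap)
    rows, cols = landscape.shape
    while heap:
        _, r, c = heapq.heappop(heap)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                labels[nr, nc] = labels[r, c]  # claim pixel for this basin
                heapq.heappush(heap, (landscape[nr, nc], nr, nc))
    return labels

# Two basins separated by a ridge at column 2; one seed per basin.
landscape = np.array([[0, 1, 2, 1, 0],
                      [0, 1, 2, 1, 0]])
seeds = np.zeros_like(landscape)
seeds[0, 0], seeds[0, 4] = 1, 2
labels = seeded_watershed(landscape, seeds)
```

Because each basin grows only from its own seed, the over-segmentation of the classical minima-driven watershed cannot occur: the number of output regions equals the number of seeds.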

Article
Automatic Gleason Grading of Prostate Cancer Using Shearlet Transform and Multiple Kernel Learning
by Hadi Rezaeilouyeh and Mohammad H. Mahoor
J. Imaging 2016, 2(3), 25; https://doi.org/10.3390/jimaging2030025 - 09 Sep 2016
Cited by 5 | Viewed by 7576
Abstract
The Gleason grading system is generally used for histological grading of prostate cancer. In this paper, we first introduce using the Shearlet transform and its coefficients as texture features for automatic Gleason grading. The Shearlet transform is a mathematical tool defined based on affine systems and can analyze signals at various orientations and scales and detect singularities, such as image edges. These properties make the Shearlet transform more suitable for Gleason grading compared to the other transform-based feature extraction methods, such as Fourier transform, wavelet transform, etc. We also extract color channel histograms and morphological features. These features are the essential building blocks of what pathologists consider when they perform Gleason grading. Then, we use the multiple kernel learning (MKL) algorithm for fusing all three different types of extracted features. We use support vector machines (SVM) equipped with MKL for the classification of prostate slides with different Gleason grades. Using the proposed method, we achieved high classification accuracy in a dataset containing 100 prostate cancer sample images of Gleason Grades 2–5.
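The kernel-fusion idea can be sketched with fixed kernel weights; true MKL learns these weights jointly with the classifier, and the paper uses an SVM, for which a kernel ridge classifier is substituted here to keep the sketch dependency-free. All data, weights, and parameters below are illustrative, not the paper's.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the Gaussian (RBF) kernel for one feature view."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Two synthetic feature "views" standing in for e.g. Shearlet-texture and
# color-histogram features, with two well-separated classes of 10 samples.
rng = np.random.default_rng(0)
X_tex = np.vstack([rng.normal(0, 0.5, (10, 3)), rng.normal(3, 0.5, (10, 3))])
X_col = np.vstack([rng.normal(0, 0.5, (10, 4)), rng.normal(3, 0.5, (10, 4))])
y = np.array([-1.0] * 10 + [1.0] * 10)

# Fixed-weight kernel fusion; learned MKL would optimize these weights
# together with the classifier instead of fixing them by hand.
K = 0.6 * rbf_kernel(X_tex, 0.5) + 0.4 * rbf_kernel(X_col, 0.5)

# Kernel ridge classifier on the fused (precomputed) kernel.
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)
accuracy = (np.sign(K @ alpha) == y).mean()
```

Since a nonnegative combination of kernels is again a valid kernel, the fused Gram matrix can be fed to any kernel method that accepts precomputed kernels.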

Article
Optimized Distributed Hyperparameter Search and Simulation for Lung Texture Classification in CT Using Hadoop
by Roger Schaer, Henning Müller and Adrien Depeursinge
J. Imaging 2016, 2(2), 19; https://doi.org/10.3390/jimaging2020019 - 07 Jun 2016
Cited by 12 | Viewed by 6485
Abstract
Many medical image analysis tasks require complex learning strategies to reach a quality of image-based decision support that is sufficient in clinical practice. The analysis of medical texture in tomographic images, for example of lung tissue, is no exception. Via a learning framework, very good classification accuracy can be obtained, but several parameters need to be optimized. This article describes a practical framework for efficient distributed parameter optimization. The proposed solutions are applicable for many research groups with heterogeneous computing infrastructures and for various machine learning algorithms. These infrastructures can easily be connected via distributed computation frameworks. We use the Hadoop framework to run and distribute both grid and random search strategies for hyperparameter optimization and cross-validations on a cluster of 21 nodes composed of desktop computers and servers. We show that significant speedups of up to 364× compared to a serial execution can be achieved using our in-house Hadoop cluster by distributing the computation and automatically pruning the search space while still identifying the best-performing parameter combinations. To the best of our knowledge, this is the first article presenting practical results in detail for complex data analysis tasks on such a heterogeneous infrastructure together with a linked simulation framework that allows for computing resource planning. The results are directly applicable in many scenarios and allow implementing an efficient and effective strategy for medical (image) data analysis and related learning approaches.
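The random-search strategy being distributed can be illustrated serially; in the paper, each evaluation (a full cross-validation) is shipped out as a Hadoop task, so the loop below is what gets parallelized. The objective is a synthetic stand-in for cross-validated accuracy, with an invented peak location.

```python
import random

def evaluate(params):
    """Toy stand-in for the cross-validated accuracy of one parameter
    combination; in the distributed setting, each call runs as its own task."""
    c, gamma = params
    # Synthetic accuracy landscape peaking at c = 1.0, gamma = 0.1.
    return 1.0 - 0.1 * abs(c - 1.0) - 0.5 * abs(gamma - 0.1)

# Random search: sample parameter combinations uniformly, score each one,
# and keep the best. Each dictionary entry is an independent unit of work.
random.seed(42)
trials = [(random.uniform(0.1, 10.0), random.uniform(0.001, 1.0))
          for _ in range(200)]
scores = {p: evaluate(p) for p in trials}
best = max(scores, key=scores.get)
```

Because the evaluations are independent, distributing them is embarrassingly parallel; the pruning described in the paper additionally discards unpromising regions of the search space before all folds are computed.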

Article
FPGA-Based Portable Ultrasound Scanning System with Automatic Kidney Detection
by R. Bharath, Punit Kumar, Chandrashekar Dusa, Vivek Akkala, Suresh Puli, Harsha Ponduri, K. Divya Krishna, P. Rajalakshmi, S. N. Merchant, Mohammed Abdul Mateen and U. B. Desai
J. Imaging 2015, 1(1), 193-219; https://doi.org/10.3390/jimaging1010193 - 04 Dec 2015
Cited by 13 | Viewed by 10771
Abstract
Although bedside diagnosis using portable ultrasound scanning (PUS) offers comfortable diagnosis with various clinical advantages, ultrasound scanners in general suffer from a poor signal-to-noise ratio, and physicians who operate the device at the point of care may not be adequately trained to perform high-level diagnosis. Such scenarios can be addressed by incorporating ambient intelligence in PUS. In this paper, we propose an architecture for a PUS system whose abilities include automated kidney detection in real time. Automated kidney detection is performed by training the Viola–Jones algorithm with a good set of kidney data consisting of diversified shapes and sizes. It is observed that the kidney detection algorithm delivers very good performance in terms of detection accuracy. The proposed PUS with the kidney detection algorithm is implemented on a single Xilinx Kintex-7 FPGA, integrated with a Raspberry Pi ARM processor running at 900 MHz.
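Viola–Jones detection rests on the integral image, which turns every rectangular (Haar-like) feature into a constant-time computation; a minimal NumPy sketch of that building block (not the trained kidney cascade from the paper):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so that
    ii[r, c] == img[:r, :c].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from four table lookups, O(1) per rectangle."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
```

A trained cascade evaluates thousands of such features per window, which is only feasible in real time (and on an FPGA) because each feature costs a handful of lookups regardless of rectangle size.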

Article
Towards a Guidance System to Aid in the Dosimetry Calculation of Intraoperative Electron Radiation Therapy
by Cristina Portalés, Jesús Gimeno, Lucía Vera and Marcos Fernández
J. Imaging 2015, 1(1), 180-192; https://doi.org/10.3390/jimaging1010180 - 20 Nov 2015
Cited by 2 | Viewed by 5697
Abstract
In Intraoperative Electron Radiation Therapy (IOERT), the lack of specific planning tools limits its applicability; the need for accurate dosimetry estimation and application during the therapy is the most critical limitation. Recently, some works have been presented that try to overcome some of the limitations to establishing planning tools, though an accurate guidance system that tracks the applicator and the patient in real time is still needed. In these surgical environments, acquiring an accurate 3D shape of the patient’s tumor bed in real time is of high interest, as current systems make use of a 3D model acquired before the treatment. In this paper, an optical-based system is presented that is able to register, in real time, the different rigid objects taking part in such a treatment. The presented guidance system and the related methodology are highly interactive, and a usability design is also provided for non-expert users. Additionally, four different approaches to acquiring the 3D model of the patient (a non-rigid object) in real time are explored and evaluated; accuracies in the range of 1 mm can be achieved without the need for expensive devices.
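Registering rigid objects from tracked marker positions reduces to the least-squares rigid-transform problem, which has a closed-form solution (the Kabsch/Procrustes algorithm); a NumPy sketch under the simplifying assumption of known point correspondences (the paper's actual tracking pipeline is not reproduced):

```python
import numpy as np

def rigid_register(P, Q):
    """Closed-form least-squares rigid transform (R, t) with R @ p + t ≈ q,
    for corresponding point sets P, Q of shape (n, 3) (Kabsch algorithm)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation, det(R) = +1
    t = cq - R @ cp
    return R, t

# Recover a known rotation about z plus a translation from noiseless points.
rng = np.random.default_rng(1)
P = rng.normal(size=(10, 3))
theta = np.pi / 3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 0.5])
R, t = rigid_register(P, Q)
```

Since the solution is a small SVD, it comfortably runs at frame rate, which is what makes real-time tracking of applicator and patient feasible.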

Article
A Kinect-Based System for Upper-Body Function Assessment in Breast Cancer Patients
by Rita Moreira, André Magalhães and Hélder P. Oliveira
J. Imaging 2015, 1(1), 134-155; https://doi.org/10.3390/jimaging1010134 - 05 Nov 2015
Cited by 6 | Viewed by 6834
Abstract
Common breast cancer treatment techniques, such as radiation therapy or the surgical removal of the axillary lymphatic nodes, result in several impairments of women’s upper-body function. These impairments include restricted shoulder mobility and arm swelling. As a consequence, several daily life activities are affected, which contributes to a decreased quality of life (QOL). It is therefore of extreme importance to assess the functional restrictions caused by cancer treatment, in order to evaluate the quality of procedures and to avoid further complications. Research in this field is still very limited, and the methods currently available suffer from a lack of objectivity, which highlights the relevance of the pioneering work presented in this paper: the development of an effective method for the evaluation of upper-body function, suitable for breast cancer patients. For this purpose, the use of both depth and skeleton data, provided by the Microsoft Kinect, is investigated to extract features of upper-limb motion. Supervised classification algorithms are used to construct a predictive model, and very promising results are obtained, with high classification accuracy.
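One of the simplest upper-limb motion features derivable from Kinect skeleton data is a joint angle tracked over a movement; a sketch (the joint triplet and the features actually used in the paper are assumptions for illustration):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c,
    e.g. an elbow angle from shoulder, elbow and wrist 3D positions."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def range_of_motion(angles):
    """Range of motion over one recorded movement: max minus min angle."""
    return max(angles) - min(angles)
```

Per-frame angles like these, summarized as ranges of motion and asymmetries between arms, are the kind of scalar features a supervised classifier can then map to a functional assessment.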

Review

Jump to: Research

Review
Polyp Detection and Segmentation from Video Capsule Endoscopy: A Review
by V. B. Surya Prasath
J. Imaging 2017, 3(1), 1; https://doi.org/10.3390/jimaging3010001 - 23 Dec 2016
Cited by 45 | Viewed by 14206
Abstract
Video capsule endoscopy (VCE) is widely used nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are usually prescribed as an additional monitoring mechanism and can help in identifying polyps, bleeding, etc. To analyze the large-scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Though polyp detection in colonoscopy and other traditional endoscopy-based images is becoming a mature field, detecting polyps automatically in VCE is a hard problem due to its unique imaging characteristics. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.
