Modern Advances in Image Fusion

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (14 December 2018) | Viewed by 30925

Special Issue Editors


Dr. Nikolaos Mitianoudis
Guest Editor
Department of Electrical and Computer Engineering, Democritus University of Thrace, 67100 Xanthi, Greece
Interests: deep learning; computer vision; audio source separation; music information retrieval

Dr. Tania Stathaki
Guest Editor
Communications and Signal Processing Research Group, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
Interests: image fusion; computer vision; remote sensing; urban monitoring; machine learning and deep learning

Special Issue Information

Dear Colleagues,

The modern development of affordable multimodal visual sensors has opened up a whole world of new capabilities for imaging and image processing in general. Such sensors include visual, depth, infrared, thermal and multispectral devices, all of which offer complementary or overlapping information about the observed scene. Instead of analysing each modality separately, Image Fusion is an image processing approach that aims to transfer the useful information from all input images into a single composite image, which is then analysed. Over the last two decades, Image Fusion has proven to be a very popular image processing task, attracting researchers from many different fields, including medical imaging, satellite imaging, High-Dynamic Range (HDR) photography and surveillance imaging. Modern methodologies, such as deep learning, parallel computation and compressive sensing, have emerged and changed modern image processing and computer vision.

The aim of this Special Issue is to present and highlight the newest trends in Image Fusion. Topics may include, but are not limited to:

  • Novel Image Fusion methodologies
  • Novel Image Fusion frameworks
  • Novel Image Fusion applications
  • Image Fusion based on deep learning
  • Multiple-modality Image Fusion
  • Multi-focus Image Fusion
  • Parallelization in Image Fusion
  • High-Dynamic Range (HDR) imaging
  • Real-time Image Fusion
  • Image Fusion for medical applications
  • Statistical Image Fusion
  • Image Fusion using compressive sensing

Dr. Nikolaos Mitianoudis
Dr. Tania Stathaki
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Image Fusion
  • Multi-modal Image Fusion
  • High Dynamic Range imaging
  • Multi-focus fusion
  • Medical Image Fusion
  • Compressive sensing
  • Deep learning
  • Parallel image processing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

16 pages, 6050 KiB  
Article
Multi-Focus Image Fusion and Depth Map Estimation Based on Iterative Region Splitting Techniques
by Wen-Nung Lie and Chia-Che Ho
J. Imaging 2019, 5(9), 73; https://doi.org/10.3390/jimaging5090073 - 2 Sep 2019
Cited by 3 | Viewed by 4889
Abstract
In this paper, a multi-focus image stack captured at varying positions of the imaging plane is processed to synthesize an all-in-focus (AIF) image and estimate its corresponding depth map. Compared with traditional methods (e.g., pixel- and block-based techniques), our focus-based measures are calculated over irregularly shaped regions that have been refined or split in an iterative manner, to adapt to different image contents. An initial all-focus image is first computed and then segmented to obtain a region map. The spatial-focal property of each region is then analyzed to determine whether the region should be iteratively split into sub-regions. After iterative splitting, the final region map is used to perform regionally best focusing, based on the winner-take-all (WTA) strategy, i.e., choosing the best-focused pixels from the image stack. The depth image can easily be converted from the resulting label image, where the label of each pixel represents the index of the image from which the best-focused pixel is chosen. Regions whose focus profiles do not yield a confident winner resort to spatial propagation from neighboring confident regions. Our experiments show that the adaptive region-splitting algorithm outperforms other state-of-the-art methods and commercial software in synthesis quality (in terms of the well-known Q metric), depth maps (in terms of subjective quality), and processing speed (with a gain of 17.81–40.43%).
(This article belongs to the Special Issue Modern Advances in Image Fusion)
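
As a pointer for experimentation, the winner-take-all selection step described in this abstract can be sketched in a few lines of NumPy/SciPy. This is only a minimal illustration under assumed choices (a squared-Laplacian focus measure averaged over a 9×9 window); it does not implement the paper's iterative region splitting or spatial propagation.

    # Minimal sketch of the winner-take-all (WTA) step: pick, per pixel, the
    # focal-stack slice with the strongest focus response. The focus measure
    # (locally averaged squared Laplacian) is an assumption of this sketch.
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def wta_fusion(stack):
        """stack: (N, H, W) grayscale focal stack -> (AIF image, label map)."""
        focus = np.stack([uniform_filter(laplace(img.astype(float)) ** 2, size=9)
                          for img in stack])
        labels = np.argmax(focus, axis=0)   # index of the best-focused slice
        rows = np.arange(stack.shape[1])[:, None]
        cols = np.arange(stack.shape[2])[None, :]
        aif = stack[labels, rows, cols]     # all-in-focus composite
        return aif, labels                  # labels double as a coarse depth map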

11 pages, 1045 KiB  
Article
Deep Learning for Breast Cancer Diagnosis from Mammograms—A Comparative Study
by Lazaros Tsochatzidis, Lena Costaridou and Ioannis Pratikakis
J. Imaging 2019, 5(3), 37; https://doi.org/10.3390/jimaging5030037 - 13 Mar 2019
Cited by 149 | Viewed by 15349
Abstract
Deep convolutional neural networks (CNNs) are investigated in the context of computer-aided diagnosis (CADx) of breast cancer. State-of-the-art CNNs are trained and evaluated on two mammographic datasets consisting of ROIs depicting benign or malignant mass lesions. The performance of each examined network is evaluated in two training scenarios: the first initializes the network with pre-trained weights, while the second initializes it randomly. Extensive experimental results show the superior performance achieved by fine-tuning a pre-trained network compared to training from scratch.
(This article belongs to the Special Issue Modern Advances in Image Fusion)
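
The two training scenarios compared in this study are straightforward to reproduce in spirit with any modern deep learning framework. The sketch below, assuming PyTorch/torchvision with ResNet-50 as a stand-in architecture, only illustrates the pre-trained versus random initialization contrast; the paper's actual networks, datasets and hyperparameters are not reproduced.

    # Illustrative contrast between the two initialization scenarios:
    # pre-trained (ImageNet) weights versus training from scratch.
    import torch.nn as nn
    from torchvision import models

    def make_classifier(pretrained: bool) -> nn.Module:
        net = models.resnet50(weights="IMAGENET1K_V1" if pretrained else None)
        net.fc = nn.Linear(net.fc.in_features, 2)  # benign vs. malignant mass
        return net

    finetuned = make_classifier(pretrained=True)   # scenario 1: fine-tuning
    scratch = make_classifier(pretrained=False)    # scenario 2: random init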

15 pages, 24652 KiB  
Article
Multiple-Exposure Image Fusion for HDR Image Synthesis Using Learned Analysis Transformations
by Ioannis Merianos and Nikolaos Mitianoudis
J. Imaging 2019, 5(3), 32; https://doi.org/10.3390/jimaging5030032 - 26 Feb 2019
Cited by 19 | Viewed by 9839
Abstract
Modern imaging applications have increased the demand for High-Dynamic Range (HDR) imaging. Nonetheless, HDR imaging is not easily attainable with low-cost imaging sensors, since their dynamic range is rather limited. A viable path to HDR imaging with low-cost sensors is the synthesis of multiple-exposure images: a low-cost sensor captures the observed scene at multiple exposure settings, and an image-fusion algorithm combines these images to form an image of increased dynamic range. In this work, two image-fusion methods are combined to tackle multiple-exposure fusion. The luminance channel is fused using the Mitianoudis and Stathaki (2008) method, while the color channels are combined using the method proposed by Mertens et al. (2007). The proposed fusion algorithm performs well without the halo artifacts that appear in other state-of-the-art methods. This paper is an extended version of a conference paper, with more analysis of the derived method and more experimental results that confirm the validity of the method.
(This article belongs to the Special Issue Modern Advances in Image Fusion)
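
The Mertens et al. (2007) exposure-fusion method cited in this abstract ships with OpenCV, which makes it easy to try multiple-exposure fusion on bracketed shots. The snippet below is a generic illustration with placeholder file names, not the combined luminance/color scheme proposed by the authors.

    # Generic multiple-exposure fusion with OpenCV's implementation of
    # Mertens et al. (2007). File names are placeholders.
    import cv2

    exposures = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]
    fused = cv2.createMergeMertens().process(exposures)  # float32 in [0, 1]
    cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))

Note that, unlike a full HDR pipeline, exposure fusion directly produces a display-ready low-dynamic-range result, so no tone-mapping step is needed.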
