Search Results (21)

Search Parameters:
Keywords = all-in-focus

16 pages, 7045 KB  
Article
Convolutional Neural Networks for Hole Inspection in Aerospace Systems
by Garrett Madison, Grayson Michael Griser, Gage Truelson, Cole Farris, Christopher Lee Colaw and Yildirim Hurmuzlu
Sensors 2025, 25(18), 5921; https://doi.org/10.3390/s25185921 - 22 Sep 2025
Cited by 1 | Viewed by 1459
Abstract
Foreign object debris (FOD) in rivet holes, machined holes, and fastener sites poses a critical risk in aerospace manufacturing, where current inspections rely on manual visual checks with flashlights and mirrors. These methods are slow, fatiguing, and prone to error. This work introduces HANNDI, a compact handheld inspection device that integrates controlled optics, illumination, and onboard deep learning for rapid and reliable inspection directly on the factory floor. The system performs focal sweeps, aligns and fuses the images into an all-in-focus representation, and applies a dual CNN pipeline based on the YOLO architecture: one network detects and localizes holes, while the other classifies debris. All training images were collected with the prototype, ensuring consistent geometry and lighting. On a withheld test set from a proprietary dataset of ≈3700 images of aerospace assets, HANNDI achieved per-class precision and recall near 95%. An end-to-end demonstration on representative aircraft parts yielded an effective task time of 13.6 s per hole. To our knowledge, this is the first handheld automated optical inspection system that combines mechanical enforcement of imaging geometry, controlled illumination, and embedded CNN inference, providing a practical path toward robust factory-floor deployment.
(This article belongs to the Section Sensing and Imaging)
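
As a rough illustration of the focal-sweep fusion step this abstract describes, the sketch below builds an all-in-focus image by picking, per pixel, the frame with the strongest local Laplacian response. This is not the HANNDI pipeline: the frames are assumed pre-aligned, and the function name and kernel sizes are invented for the example.

```python
# Minimal focal-sweep fusion sketch: per-pixel selection of the sharpest
# frame by a smoothed absolute-Laplacian focus measure. Illustrative only.
import cv2
import numpy as np

def fuse_focal_sweep(frames):
    """frames: list of pre-aligned uint8 BGR images from one focal sweep."""
    stack = np.stack(frames)                          # (N, H, W, 3)
    sharpness = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
        lap = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
        sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
    idx = np.argmax(np.stack(sharpness), axis=0)      # index of sharpest frame
    rows, cols = np.indices(idx.shape)
    return stack[idx, rows, cols]                     # (H, W, 3) all-in-focus
```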

27 pages, 13262 KB  
Article
MLP-MFF: Lightweight Pyramid Fusion MLP for Ultra-Efficient End-to-End Multi-Focus Image Fusion
by Yuze Song, Xinzhe Xie, Buyu Guo, Xiaofei Xiong and Peiliang Li
Sensors 2025, 25(16), 5146; https://doi.org/10.3390/s25165146 - 19 Aug 2025
Cited by 2 | Viewed by 1414
Abstract
Limited depth of field in modern optical imaging systems often results in partially focused images. Multi-focus image fusion (MFF) addresses this by synthesizing an all-in-focus image from multiple source images captured at different focal planes. While deep learning-based MFF methods have shown promising results, existing approaches face significant challenges. Convolutional Neural Networks (CNNs) often struggle to capture long-range dependencies effectively, while Transformer and Mamba-based architectures, despite their strengths, suffer from high computational costs and rigid input-size constraints, frequently necessitating patch-wise fusion during inference, a compromise that undermines a true global receptive field. To overcome these limitations, we propose MLP-MFF, a novel lightweight, end-to-end MFF network built upon the Pyramid Fusion Multi-Layer Perceptron (PFMLP) architecture. MLP-MFF is specifically designed to handle flexible input scales, efficiently learn multi-scale feature representations, and capture critical long-range dependencies. Furthermore, we introduce a Dual-Path Adaptive Multi-scale Feature-Fusion Module based on Hybrid Attention (DAMFFM-HA), which adaptively integrates hybrid attention mechanisms and allocates weights to optimally fuse multi-scale features, thereby significantly enhancing fusion performance. Extensive experiments on public multi-focus image datasets demonstrate that MLP-MFF achieves competitive, and often superior, fusion quality compared with current state-of-the-art MFF methods, all while maintaining a lightweight and efficient architecture.
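
For readers unfamiliar with MLP-based vision backbones, the block below sketches a generic MLP-Mixer-style layer of the broad family the abstract refers to, alternating token mixing and channel mixing. It is not the paper's PFMLP or DAMFFM-HA module; all layer sizes are arbitrary assumptions.

```python
# Generic MLP-Mixer-style block (token MLP + channel MLP), shown only to
# illustrate the class of architecture; not the paper's PFMLP.
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_tokens, dim, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(      # mixes information across tokens
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(    # mixes information across channels
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                    # x: (batch, tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mlp(self.norm2(x))
```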

18 pages, 1098 KB  
Article
Enhancing Facial Expression Recognition through Light Field Cameras
by Sabrine Djedjiga Oucherif, Mohamad Motasem Nawaf, Jean-Marc Boï, Lionel Nicod, Elodie Mallor, Séverine Dubuisson and Djamal Merad
Sensors 2024, 24(17), 5724; https://doi.org/10.3390/s24175724 - 3 Sep 2024
Cited by 2 | Viewed by 2824
Abstract
In this paper, we study facial expression recognition (FER) using three modalities obtained from a light field camera: sub-aperture (SA), depth map, and all-in-focus (AiF) images. Our objective is to construct a more comprehensive and effective FER system by investigating multimodal fusion strategies. For this purpose, we employ EfficientNetV2-S, pre-trained on AffectNet, as our primary convolutional neural network. This model, combined with a BiGRU, is used to process SA images. We evaluate various fusion techniques at both decision and feature levels to assess their effectiveness in enhancing FER accuracy. Our findings show that the model using SA images surpasses state-of-the-art performance, achieving 88.13% ± 7.42% accuracy under the subject-specific evaluation protocol and 91.88% ± 3.25% under the subject-independent evaluation protocol. These results highlight our model’s potential in enhancing FER accuracy and robustness, outperforming existing methods. Furthermore, our multimodal fusion approach, integrating SA, AiF, and depth images, demonstrates substantial improvements over unimodal models. The decision-level fusion strategy, particularly using average weights, proved most effective, achieving 90.13% ± 4.95% accuracy under the subject-specific evaluation protocol and 93.33% ± 4.92% under the subject-independent evaluation protocol. This approach leverages the complementary strengths of each modality, resulting in a more comprehensive and accurate FER system.
(This article belongs to the Section Sensing and Imaging)
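
The "average weights" decision-level fusion the abstract reports can be sketched in a few lines: each modality's classifier emits class probabilities, and their weighted average is taken before the argmax. The function name and weights below are placeholders, not the authors' code.

```python
# Decision-level fusion sketch: weighted average of per-modality softmax
# outputs, then argmax. The default is the simple equal-weight case.
import numpy as np

def fuse_decisions(probs_sa, probs_aif, probs_depth, weights=(1/3, 1/3, 1/3)):
    """Each probs_* is a (num_classes,) softmax output for one modality."""
    stacked = np.stack([probs_sa, probs_aif, probs_depth])   # (3, num_classes)
    fused = (np.asarray(weights)[:, None] * stacked).sum(axis=0)
    return int(np.argmax(fused))                             # predicted class
```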

19 pages, 16379 KB  
Article
A Novel Method for CSAR Multi-Focus Image Fusion
by Jinxing Li, Leping Chen, Daoxiang An, Dong Feng and Yongping Song
Remote Sens. 2024, 16(15), 2797; https://doi.org/10.3390/rs16152797 - 30 Jul 2024
Cited by 5 | Viewed by 1611
Abstract
Circular synthetic aperture radar (CSAR) has recently attracted considerable interest for its excellent performance in civilian and military applications. In CSAR imaging, however, the result is defocused when the height of an object deviates from the reference height. Existing approaches to this problem rely on digital elevation models (DEMs) for error compensation, but collecting a DEM requires specific, costly equipment, while inverting a DEM from the echo is computationally intensive and yields unsatisfactory accuracy. Inspired by multi-focus image fusion of optical images, a spatial-domain fusion method is proposed based on the sum of modified Laplacian (SML) and a guided filter. After obtaining CSAR images at a stack of different reference heights, an all-in-focus image can be computed by the proposed method. First, the SMLs of all source images are calculated. Second, initial decision maps are acquired by selecting the maximum SML value pixel by pixel. Third, a guided filter is used to correct the initial decision maps. Finally, the source images and decision maps are fused to obtain the result. A comparative experiment verifies the exceptional performance of the proposed method, and processing results on real-measured CSAR data demonstrate that it is effective and practical.
(This article belongs to the Section Remote Sensing Image Processing)
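
In the spirit of the steps just listed, a compact sketch follows: an SML focus measure per image in the height stack, argmax decision maps, guided-filter refinement, and weighted re-fusion. It needs opencv-contrib-python for cv2.ximgproc.guidedFilter; the window size, radius, and eps are assumptions, and border handling via np.roll is simplified.

```python
# SML + guided-filter fusion sketch over a stack of images reconstructed at
# different reference heights. Parameters are illustrative assumptions.
import cv2
import numpy as np

def sml(img, step=1, window=5):
    """Sum of modified Laplacian for a 2-D float image."""
    ml = (np.abs(2*img - np.roll(img, step, 1) - np.roll(img, -step, 1)) +
          np.abs(2*img - np.roll(img, step, 0) - np.roll(img, -step, 0)))
    return cv2.boxFilter(ml, -1, (window, window), normalize=False)

def fuse_stack(stack, guide):
    """stack: (N, H, W) float32 images; guide: (H, W) float32 guidance image."""
    focus = np.stack([sml(s) for s in stack])
    initial = (focus == focus.max(axis=0)).astype(np.float32)  # decision maps
    refined = np.stack([cv2.ximgproc.guidedFilter(guide, d, 8, 1e-3)
                        for d in initial])                     # corrected maps
    refined /= refined.sum(axis=0, keepdims=True) + 1e-6       # normalize
    return (refined * stack).sum(axis=0)
```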

13 pages, 1274 KB  
Article
Multi-Focus Images Fusion for Fluorescence Imaging Based on Local Maximum Luminosity and Intensity Variance
by Hao Cheng, Kaijie Wu, Chaochen Gu and Dingrui Ma
Sensors 2024, 24(15), 4909; https://doi.org/10.3390/s24154909 - 29 Jul 2024
Cited by 1 | Viewed by 1766
Abstract
Due to the limited depth of field of high-resolution fluorescence microscopes, it is difficult to obtain an image with all objects in focus. Existing image fusion methods suffer from blocking effects or out-of-focus fluorescence. The proposed multi-focus image fusion method, based on local maximum luminosity, intensity variance, and an information-filling step, can reconstruct the all-in-focus image. Moreover, the depth of the tissue's surface can be estimated to reconstruct a 3D surface model.
(This article belongs to the Special Issue Fluorescence Imaging and Sensing)
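
A loose sketch of fusion driven by local luminosity and intensity variance, as named in the abstract, is given below; the paper's exact rules, its information-filling step, and the z-step value are not reproduced and should be treated as assumptions.

```python
# Per-pixel selection of the slice that is both bright and locally
# structured; the slice index doubles as a crude depth estimate.
import cv2
import numpy as np

def fuse_fluorescence(stack, z_step_um=2.0, window=7):
    """stack: (N, H, W) float32 fluorescence slices ordered by focal depth."""
    k = (window, window)
    mean = np.stack([cv2.boxFilter(s, -1, k) for s in stack])
    var = np.stack([cv2.boxFilter(s * s, -1, k) for s in stack]) - mean**2
    score = mean * np.sqrt(np.maximum(var, 0))   # luminosity x local contrast
    idx = np.argmax(score, axis=0)
    rows, cols = np.indices(idx.shape)
    return stack[idx, rows, cols], idx * z_step_um   # fused image, depth map
```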

18 pages, 13550 KB  
Article
Content-Adaptive Light Field Contrast Enhancement Using Focal Stack and Hierarchical Network
by Xiangyan Guo, Jinhao Guo, Zhongyun Yuan and Yongqiang Cheng
Appl. Sci. 2024, 14(11), 4885; https://doi.org/10.3390/app14114885 - 5 Jun 2024
Viewed by 1750
Abstract
Light field (LF) cameras can capture a scene's information from many different directions and provide comprehensive image information. However, the resulting data processing commonly encounters problems of low contrast and low image quality. In this article, we put forward a content-adaptive light field contrast enhancement scheme using a focal stack (FS) and a hierarchical structure. The proposed FS set contains 300 light field images captured with a Lytro Illum camera, supplemented by the classical Stanford Lytro Light Field Archive and the JPEG Pleno Database. According to their global brightness, the acquired LF images are classified into four categories. First, we transform the original LF FS into a depth map (DMAP) and an all-in-focus (AIF) image, and the image category is preliminarily determined from the brightness information. Then, adaptive parameters are obtained by training the corresponding multilayer perceptron (MLP) network, which enhances the contrast and adjusts the light field image. Finally, our method automatically produces an enhanced FS based on the DMAP and AIF image. The experimental comparisons demonstrate that the adaptive values predicted by our MLP have high precision and approach the ground truth. Moreover, compared to existing contrast enhancement methods, our method provides global contrast enhancement that improves local areas without over-enhancing them. The complexity of image processing is reduced, and real-time, adaptive LF enhancement is realized.
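
The brightness-based routing described above can be sketched as a simple global-mean classifier; the thresholds below are invented placeholders, and the per-category MLPs of the paper are not shown.

```python
# Four-way brightness categorization from the global mean of the AIF image.
import numpy as np

def brightness_category(aif, thresholds=(0.25, 0.5, 0.75)):
    """aif: all-in-focus image scaled to [0, 1]; returns 0 (dark) .. 3 (bright)."""
    return int(np.digitize(float(aif.mean()), thresholds))
```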

21 pages, 3355 KB  
Article
All-in-Focus Three-Dimensional Reconstruction Based on Edge Matching for Artificial Compound Eye
by Sidong Wu, Liuquan Ren and Qingqing Yang
Appl. Sci. 2024, 14(11), 4403; https://doi.org/10.3390/app14114403 - 22 May 2024
Viewed by 1715
Abstract
An artificial compound eye consists of multiple apertures that allow for a large field of view (FOV) while maintaining a small size. Each aperture captures a sub-image, and multiple sub-images are needed to reconstruct the full FOV. The reconstruction process is depth-dependent due to the parallax between adjacent apertures. This paper presents an all-in-focus 3D reconstruction method for a specific type of artificial compound eye called the electronic cluster eye (eCley). The proposed method uses edge matching to address the edge blur and large textureless areas present in the sub-images. First, edges are extracted from each sub-image, and a matching operator is applied to match the edges based on their shape context and intensity. This produces a sparse matching result that is then propagated to the whole image. Next, a depth consistency check and refinement step refines the depth of all sub-images. Finally, the sub-images and depth maps are merged to produce the final all-in-focus image and depth map. Experimental results and comparative analysis demonstrate the effectiveness of the proposed method.
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision)
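
To make the first step concrete, the toy sketch below extracts edge pixels per sub-image and matches a single edge pixel between adjacent apertures with a 1-D parallax search over local patches. The paper's shape-context matching, propagation, and consistency check are omitted; all parameters are assumptions.

```python
# Edge extraction plus a naive SSD patch match along the parallax direction.
import cv2
import numpy as np

def edge_pixels(gray):
    """gray: uint8 sub-image; returns (K, 2) array of edge row/col positions."""
    return np.argwhere(cv2.Canny(gray, 50, 150) > 0)

def match_edge(gray_a, gray_b, pt, search=10, patch=4):
    """Best match in gray_b for an interior edge pixel `pt` of gray_a."""
    r, c = pt
    ref = gray_a[r-patch:r+patch+1, c-patch:c+patch+1].astype(np.float32)
    best, best_pos = -np.inf, None
    for dc in range(-search, search + 1):          # 1-D parallax search
        c0 = c + dc - patch
        if c0 < 0 or c0 + 2*patch + 1 > gray_b.shape[1]:
            continue                               # skip out-of-bounds windows
        cand = gray_b[r-patch:r+patch+1, c0:c0+2*patch+1].astype(np.float32)
        score = -float(np.sum((cand - ref) ** 2))  # negated SSD similarity
        if score > best:
            best, best_pos = score, (r, c + dc)
    return best_pos
```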

25 pages, 20118 KB  
Article
Light Field View Synthesis Using the Focal Stack and All-in-Focus Image
by Rishabh Sharma, Stuart Perry and Eva Cheng
Sensors 2023, 23(4), 2119; https://doi.org/10.3390/s23042119 - 13 Feb 2023
Cited by 1 | Viewed by 3595
Abstract
Light field reconstruction and synthesis algorithms are essential for improving the low spatial resolution of hand-held plenoptic cameras. Previous light field synthesis algorithms produce blurred regions around depth discontinuities, especially stereo-based algorithms, where no information is available to fill the occluded areas in the light field image. In this paper, we propose a light field synthesis algorithm that uses the focal stack images and the all-in-focus image to synthesize a 9 × 9 sub-aperture view light field image. Our approach uses depth from defocus to estimate a depth map. Then, we use the depth map and the all-in-focus image to synthesize the sub-aperture views and their corresponding depth maps by mimicking the apparent shifting of the central image according to the depth values. We handle the occluded regions in the synthesized sub-aperture views by filling them with information recovered from the focal stack images. We also show that, if the depth levels in the image are known, we can synthesize a high-accuracy light field image with just five focal stack images. The accuracy of our approach is compared with three state-of-the-art algorithms: one non-learning and two CNN-based approaches. The results show that our algorithm outperforms all three in terms of PSNR and SSIM.
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)
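
The "apparent shifting of the central image" idea lends itself to a short sketch: forward-warp the all-in-focus image by a shift proportional to depth and the view's offset from the center, leaving occlusions as holes (which the paper fills from the focal stack). The shift scale is an arbitrary assumption.

```python
# Forward-warp view-synthesis sketch from an AIF image and a depth map.
import numpy as np

def synthesize_view(aif, depth, u, v, scale=1.0):
    """aif: (H, W, 3); depth: (H, W) disparity levels; (u, v): view offset."""
    h, w = depth.shape
    rows, cols = np.indices((h, w))
    r = np.clip(rows + np.round(v * scale * depth).astype(int), 0, h - 1)
    c = np.clip(cols + np.round(u * scale * depth).astype(int), 0, w - 1)
    out = np.zeros_like(aif)
    out[r, c] = aif[rows, cols]     # occluded regions remain zero-valued holes
    return out
```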

14 pages, 3654 KB  
Article
A Fast and Cost-Effective (FACE) Instrument Setting to Construct Focus-Extended Images
by Gilbert Audira, Ting-Wei Hsu, Kelvin H.-C. Chen, Jong-Chin Huang, Ming-Der Lin, Tzong-Rong Ger and Chung-Der Hsiao
Inventions 2022, 7(4), 110; https://doi.org/10.3390/inventions7040110 - 29 Nov 2022
Cited by 1 | Viewed by 4689
Abstract
Image stacking is a crucial method for micro and macro photography. It captures images at different focal planes and then merges them into a single, all-in-focus image with extended focus. This method has been extensively used for digital documentation by scientists working at museums or research institutions. However, the traditional image stacking method relies on expensive instruments that perform precise image stacking with a computer-controlled stepper motor. In this study, we report how to conduct image focus extension, with quality comparable to that achieved by a motorized stepper, using a cost-effective instrument setting and an efficient manual stacking method. The method offers a shorter operation time, the capability to capture images of living objects, and high flexibility in imaging objects from the cm to the mm scale. However, it also has some limitations: the aperture and exposure time cannot be controlled, the working distance at high magnification is relatively short, additional steps are required to convert the video into images, and the approach relies heavily on the user's manual observation prior to video recording. Nevertheless, the authors believe that the current method can serve as an alternative way to conduct image stacking. The development of such an instrument and method offers a promising avenue for scientists to perform image stacking with greater flexibility and speed in macro photography.
(This article belongs to the Collection Feature Innovation Papers)
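
Since the method records a focal sweep as video and then converts it to still frames, the conversion step might look like the OpenCV sketch below; the file name and sampling stride are placeholders.

```python
# Extract every n-th frame from a recorded focal-sweep video.
import cv2

def video_to_frames(path="sweep.mp4", every_n=1):
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames
```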

21 pages, 15963 KB  
Article
Method and Device of All-in-Focus Imaging with Overexposure Suppression in an Irregular Pipe
by Shuangjie Wang, Qiang Xing, Haili Xu, Guyue Lu and Jiajia Wang
Sensors 2022, 22(19), 7634; https://doi.org/10.3390/s22197634 - 9 Oct 2022
Cited by 2 | Viewed by 2554
Abstract
To avoid depth-of-field mismatches caused by changes in pipe structure, and image overexposure caused by highly reflective surfaces, when radially imaging irregular pipes, this paper proposes a novel all-in-focus, adaptable, low scene-coupling method that suppresses overexposure in support of fault detection. First, the pipeline's radial depth distribution is obtained by sensors, and an optimal all-in-focus imaging scheme is established by combining camera parameters. Second, using digital imaging technology, the high-reflection effect produced by disparate light sources is comprehensively evaluated for overexposure suppression. Third, a device is designed for imaging non-Lambertian free-form surface scenes under low illumination, providing the sequence images needed for the next step. Finally, specific digital fusions are applied to the sequential images to obtain an all-in-focus final image without overexposure. An image-quality analysis method is then used to measure the efficacy of the system in capturing the characteristic information of the inner surfaces of an irregular pipe. Experimental results show that the method and device can distinguish lines as small as 0.5 mm wide at depths ranging from 40 to 878 mm and can provide efficient image support for defect inspection of irregular pipes, free-form surfaces, and other irregular surfaces.
(This article belongs to the Section Sensing and Imaging)
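
One plausible reading of the final digital-fusion step, suppressing overexposed highlights across captures taken under different lighting, is standard Mertens exposure fusion, sketched below. This is a stand-in technique, not the paper's specific scheme.

```python
# Mertens exposure fusion as an overexposure-suppression stand-in.
import cv2
import numpy as np

def suppress_overexposure(images):
    """images: list of uint8 BGR captures of one scene under varied lighting."""
    fused = cv2.createMergeMertens().process(images)  # float32, roughly [0, 1]
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```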

28 pages, 8186 KB  
Article
Improved Procedure for Multi-Focus Images Using Image Fusion with qshiftN DTCWT and MPCA in Laplacian Pyramid Domain
by Chinnem Rama Mohan, Kuldeep Chouhan, Ranjeet Kumar Rout, Kshira Sagar Sahoo, Noor Zaman Jhanjhi, Ashraf Osman Ibrahim and Abdelzahir Abdelmaboud
Appl. Sci. 2022, 12(19), 9495; https://doi.org/10.3390/app12199495 - 22 Sep 2022
Cited by 14 | Viewed by 3357
Abstract
Multi-focus image fusion (MIF) uses fusion rules to combine two or more images of the same scene, captured with various focus settings, into a fully focused image. An all-in-focus image refers to a fully focused image that is more informative and useful for visual perception. A high-quality fused image requires shift-invariance and directional selectivity, yet traditional wavelet-based fusion methods create ringing distortions in the fused image because they lack both properties. In this paper, a classical MIF system based on quarter-shift dual-tree complex wavelet transform (qshiftN DTCWT) and modified principal component analysis (MPCA) in the Laplacian pyramid (LP) domain is proposed to extract the focused image from multiple source images. In the proposed fusion approach, the LP first decomposes the multi-focus source images into low-frequency (LF) and high-frequency (HF) components. Then, qshiftN DTCWT is used to fuse the low- and high-frequency components to produce a fused image. Finally, to improve the effectiveness of the qshiftN DTCWT and LP-based method, the MPCA algorithm is utilized to generate an all-in-focus image. Owing to its directionality and shift-invariance, this transform can provide high-quality information in a fused image. Experimental results demonstrate that the proposed method outperforms many state-of-the-art techniques in terms of visual and quantitative evaluations.
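
A hedged sketch of DTCWT-domain fusion follows, using the open-source dtcwt Python package (which implements Kingsbury's Q-shift transform), a max-absolute rule on the complex highpass bands, and an averaged lowpass. The paper's Laplacian-pyramid wrapping and MPCA step are not reproduced here.

```python
# Two-image DTCWT fusion sketch with the `dtcwt` package.
import numpy as np
import dtcwt

def dtcwt_fuse(img_a, img_b, nlevels=4):
    """img_a, img_b: grayscale float arrays of identical shape."""
    t = dtcwt.Transform2d()
    pa = t.forward(img_a, nlevels=nlevels)
    pb = t.forward(img_b, nlevels=nlevels)
    low = (pa.lowpass + pb.lowpass) / 2.0        # average the approximations
    highs = []
    for ha, hb in zip(pa.highpasses, pb.highpasses):
        pick = np.abs(ha) >= np.abs(hb)          # keep the stronger coefficient
        highs.append(np.where(pick, ha, hb))
    return t.inverse(dtcwt.Pyramid(low, tuple(highs)))
```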

18 pages, 80059 KB  
Article
All-In-Focus Polarimetric Imaging Based on an Integrated Plenoptic Camera with a Key Electrically Tunable LC Device
by Mingce Chen, Zhexun Li, Mao Ye, Taige Liu, Chai Hu, Jiashuo Shi, Kewei Liu, Zhe Wang and Xinyu Zhang
Micromachines 2022, 13(2), 192; https://doi.org/10.3390/mi13020192 - 26 Jan 2022
Cited by 5 | Viewed by 3193
Abstract
In this paper, a prototyped plenoptic camera for all-in-focus polarimetric imaging, based on a key electrically tunable liquid-crystal (LC) device, is proposed. Using computer numerical control machining and 3D printing, the proposed imaging architecture can be integrated into a hand-held prototyped plenoptic camera, greatly improving its applicability for outdoor imaging measurements. Compared with previous square-period liquid-crystal microlens arrays (LCMLA), the utilized hexagonal-period LCMLA increases the light utilization rate by ~15%. Experiments demonstrate that the proposed imaging approach can simultaneously realize both plenoptic and polarimetric imaging without any macroscopic moving parts. With the depth-based rendering method, both all-in-focus images and all-in-focus degree of linear polarization (DoLP) images can be obtained efficiently. Owing to the large depth-of-field advantage of plenoptic cameras, the proposed camera enables polarimetric imaging over a larger depth range than conventional 2D polarimetric cameras. Currently, raw light field images with three polarization states (I0, I60, and I120) can be captured by the proposed imaging architecture, with a switching time of several tens of milliseconds. Local patterns selected as target features of interest can be effectively suppressed or markedly enhanced by switching among these polarization states. According to the experiments, visibility in a scattering medium can also be appreciably improved. The proposed polarimetric imaging approach can therefore be expected to show excellent development potential.
(This article belongs to the Special Issue Optics and Photonics in Micromachines)
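
With intensities measured behind linear polarizers at 0, 60, and 120 degrees, the Stokes parameters and the degree of linear polarization follow from the standard relation I(θ) = (S0 + S1·cos2θ + S2·sin2θ)/2, which is easy to sketch; the epsilon guard below is an implementation detail, not part of the paper.

```python
# DoLP from three polarization channels at 0/60/120 degrees.
import numpy as np

def dolp(i0, i60, i120):
    s0 = (2.0 / 3.0) * (i0 + i60 + i120)
    s1 = (2.0 / 3.0) * (2.0 * i0 - i60 - i120)
    s2 = (2.0 / np.sqrt(3.0)) * (i60 - i120)
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
```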

14 pages, 5958 KB  
Communication
Multi-Focus Image Fusion Using Focal Area Extraction in a Large Quantity of Microscopic Images
by Jiyoung Lee, Seunghyun Jang, Jungbin Lee, Taehan Kim, Seonghan Kim, Jongbum Seo, Ki Hean Kim and Sejung Yang
Sensors 2021, 21(21), 7371; https://doi.org/10.3390/s21217371 - 5 Nov 2021
Cited by 3 | Viewed by 3089
Abstract
The non-invasive examination of conjunctival goblet cells using a microscope is a novel procedure for the diagnosis of ocular surface diseases. However, it is difficult to generate an all-in-focus image due to the curvature of the eyes and the limited focal depth of the microscope. The microscope acquires multiple images with axial translation of the focus, and the resulting image stack must be processed. Thus, we propose a multi-focus image fusion method to generate an all-in-focus image from multiple microscopic images. First, a bandpass filter is applied to the source images, and the focus areas are extracted using a Laplacian transformation and thresholding with a morphological operation. Next, a self-adjusting guided filter is applied for natural connections between local focus images. A window-size-updating method is adopted in the guided filter to reduce the number of parameters. This paper presents a novel algorithm that can operate on a large quantity of images (10 or more) and obtain an all-in-focus image. To quantitatively evaluate the proposed method, two different types of evaluation metrics are used: “full-reference” and “no-reference”. The experimental results demonstrate that the algorithm is robust to noise and capable of preserving local focus information through focal-area extraction. Additionally, the proposed method outperforms state-of-the-art approaches in terms of both visual effects and image quality assessments.
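
The focus-area extraction chain named above (bandpass filter, Laplacian, thresholding, morphological clean-up) is sketched below; the kernel sizes and the mean-based threshold are assumptions, and the self-adjusting guided-filter stage is omitted.

```python
# Focus-mask sketch: difference-of-Gaussians bandpass, Laplacian energy,
# threshold, and morphological opening. Parameters are illustrative.
import cv2
import numpy as np

def focus_mask(gray):
    """gray: single-channel uint8 microscope image."""
    band = (cv2.GaussianBlur(gray, (3, 3), 0).astype(np.float32)
            - cv2.GaussianBlur(gray, (31, 31), 0))          # crude bandpass
    energy = cv2.boxFilter(np.abs(cv2.Laplacian(band, cv2.CV_32F, ksize=3)),
                           -1, (15, 15))                    # local focus energy
    mask = (energy > energy.mean()).astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
```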

22 pages, 5797 KB  
Article
Exploiting Superpixels for Multi-Focus Image Fusion
by Areeba Ilyas, Muhammad Shahid Farid, Muhammad Hassan Khan and Marcin Grzegorzek
Entropy 2021, 23(2), 247; https://doi.org/10.3390/e23020247 - 21 Feb 2021
Cited by 9 | Viewed by 4557
Abstract
Multi-focus image fusion is the process of combining focused regions of two or more images to obtain a single all-in-focus image. It is an important research area because a fused image is of high quality and contains more details than the source images. This makes it useful for numerous applications in image enhancement, remote sensing, object recognition, medical imaging, etc. This paper presents a novel multi-focus image fusion algorithm that groups locally connected pixels with similar colors and patterns, usually referred to as superpixels, and uses them to separate the focused and de-focused regions of an image. We note that these superpixels are more expressive than individual pixels and carry more distinctive statistical properties than neighboring superpixels. The statistical properties of superpixels are analyzed to categorize the pixels as focused or de-focused and to estimate a focus map. A spatial consistency constraint is enforced on the initial focus map to obtain a refined map, which is used in the fusion rule to obtain a single all-in-focus image. Qualitative and quantitative evaluations are performed to assess the performance of the proposed method on a benchmark multi-focus image fusion dataset. The results show that our method produces better quality fused images than existing image fusion techniques.
(This article belongs to the Special Issue Advances in Image Fusion)
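
The superpixel idea can be sketched with SLIC segmentation and a per-region sharpness statistic used as an initial focus map; the paper's richer statistical analysis and spatial-consistency refinement are not shown, and the segment count is an assumption.

```python
# Superpixel focus-map sketch: SLIC regions scored by mean Laplacian energy.
import numpy as np
from scipy.ndimage import laplace
from skimage.segmentation import slic

def superpixel_focus_map(image_rgb, gray, n_segments=400):
    """image_rgb: (H, W, 3) float in [0, 1]; gray: (H, W) float."""
    labels = slic(image_rgb, n_segments=n_segments, start_label=0)
    energy = np.abs(laplace(gray))                  # pixel-level sharpness
    focus = np.zeros_like(gray)
    for lab in np.unique(labels):
        region = labels == lab
        focus[region] = energy[region].mean()       # one score per superpixel
    return focus    # compare across source images; larger means more in focus
```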

20 pages, 7696 KB  
Article
A Novel Multi-Focus Image Fusion Network with U-Shape Structure
by Tao Pan, Jiaqin Jiang, Jian Yao, Bin Wang and Bin Tan
Sensors 2020, 20(14), 3901; https://doi.org/10.3390/s20143901 - 13 Jul 2020
Cited by 10 | Viewed by 3550
Abstract
Multi-focus image fusion has become a very practical image processing task. It uses multiple images focused on different depth planes to create an all-in-focus image. Although extensive studies exist, the performance of existing methods is still limited by inaccurate detection of the focus regions for fusion. Therefore, in this paper, we propose a novel U-shape network that generates an accurate decision map for multi-focus image fusion. The Siamese encoder of our U-shape network preserves low-level cues with rich spatial details and high-level semantic information from the source images separately. Moreover, we introduce ResBlocks to expand the receptive field, which enhances the ability of our network to distinguish between focused and defocused regions. In the bridge stage between the encoder and decoder, spatial pyramid pooling is adopted as a global perception fusion module to capture sufficient context information for learning the decision map. Finally, we use a hybrid loss that combines the binary cross-entropy loss and the structural similarity loss for supervision. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance.
(This article belongs to the Section Intelligent Sensors)
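
The hybrid supervision mentioned above, binary cross-entropy plus a structural-similarity term on the decision map, can be sketched as follows; the single-scale SSIM and the equal weighting are simplifying assumptions, not the paper's exact settings.

```python
# Hybrid BCE + (1 - SSIM) loss sketch for decision-map supervision.
import torch.nn.functional as F

def ssim(x, y, c1=0.01**2, c2=0.03**2, win=11):
    """Single-scale SSIM via average pooling; x, y in [0, 1], shape (B,1,H,W)."""
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return s.mean()

def hybrid_loss(pred, target, alpha=0.5):
    """pred, target: (B, 1, H, W) decision maps in [0, 1]."""
    return alpha * F.binary_cross_entropy(pred, target) + \
           (1 - alpha) * (1 - ssim(pred, target))
```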
