Search Results (52)

Search Parameters:
Keywords = Shearlet transform

24 pages, 3908 KB  
Article
Transform Domain Based GAN with Deep Multi-Scale Features Fusion for Medical Image Super-Resolution
by Huayong Yang, Qingsong Wei and Yu Sang
Electronics 2025, 14(18), 3726; https://doi.org/10.3390/electronics14183726 - 20 Sep 2025
Viewed by 297
Abstract
High-resolution (HR) medical images provide clearer anatomical details and facilitate early disease diagnosis, yet acquiring HR scans is often limited by imaging conditions, device capabilities, and patient factors. We propose a transform domain deep multiscale feature fusion generative adversarial network (MSFF-GAN) for medical image super-resolution (SR). Considering the advantages of generative adversarial networks (GANs) and convolutional neural networks (CNNs), MSFF-GAN integrates a deep multi-scale convolution network into the GAN generator, which is composed primarily of a series of cascaded multi-scale feature extraction blocks in a coarse-to-fine manner to restore the medical images. Two tailored blocks are designed: a multiscale information distillation (MSID) block that adaptively captures long- and short-path features across scales, and a granular multiscale (GMS) block that expands receptive fields at fine granularity to strengthen multiscale feature extraction with reduced computational cost. Unlike conventional methods that predict HR images directly in the spatial domain, which often yield excessively smoothed outputs with missing textures, we formulate SR as the prediction of coefficients in the non-subsampled shearlet transform (NSST) domain. This transform domain modeling enables better preservation of global anatomical structure and local texture details. The predicted coefficients are inverted to reconstruct HR images, and the transform domain subbands are also fed to the discriminator to enhance its discrimination ability and improve perceptual fidelity. Extensive experiments on medical image datasets demonstrate that MSFF-GAN outperforms state-of-the-art approaches in structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), while more effectively preserving global anatomy and fine textures. 
These results validate the effectiveness of combining multiscale feature fusion with transform domain prediction for high-quality medical image super-resolution.
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)
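The central design choice above, predicting coefficients in an invertible transform domain and reconstructing the image from them rather than predicting pixels directly, can be illustrated without a shearlet library (NSST is not part of the standard Python stack). The sketch below, assuming NumPy and SciPy, stands in a single Gaussian low/high band split for the many directional NSST sub-bands; the point is only that the decomposition is perfectly invertible, so a model that predicts the bands fully determines the reconstructed image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(img, sigma=2.0):
    """Two-band stand-in for a transform-domain decomposition: a smooth
    low-frequency approximation plus a high-frequency residual."""
    low = gaussian_filter(img, sigma)  # global structure
    high = img - low                   # edges and texture
    return low, high

def merge_bands(low, high):
    """Inverse of split_bands: the bands sum back to the original image."""
    return low + high

rng = np.random.default_rng(0)
img = rng.random((64, 64))
low, high = split_bands(img)
recon = merge_bands(low, high)
assert np.allclose(recon, img)  # the round trip is lossless
```

A real NSST replaces the single residual with directional sub-bands at several scales, but the generator and discriminator described in the abstract operate on exactly this kind of band representation.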

18 pages, 19035 KB  
Article
Multiscale 3-D Stochastic Inversion of Frequency-Domain Airborne Electromagnetic Data
by Yang Su, Xiuyan Ren, Changchun Yin, Libao Wang, Yunhe Liu, Bo Zhang and Luyuan Wang
Remote Sens. 2024, 16(16), 3070; https://doi.org/10.3390/rs16163070 - 21 Aug 2024
Viewed by 1144
Abstract
In mineral, environmental, and engineering explorations, we frequently encounter geological bodies with varied sizes, depths, and conductivity contrasts with the surrounding rocks, and try to interpret them with a single survey dataset. Conventional three-dimensional (3-D) inversions rely significantly on the grid size, which should be smaller than the smallest geological target to achieve a good recovery of the anomalous electric conductivity. However, this creates a large number of unknowns to be solved and costs significant time and memory. In this paper, we present a multi-scale (MS) stochastic inversion scheme based on the shearlet transform for airborne electromagnetic (AEM) data. The shearlet is multi-directional and multi-scale, allowing it to effectively characterize the underground conductivity distribution in the transformed domain. To address the practical implementation of the method, we use a compressed sensing method in the forward modeling and sensitivity calculation, and employ a preconditioner that accounts for both the sampling rate and gradient noise to achieve a fast stochastic 3-D inversion. By gradually updating the coefficients from the coarse to fine scales, we obtain multi-scale information on the underground electric conductivity. The synthetic data inversion shows that the proposed MS method can better recover multiple geological bodies with different sizes and depths with less time consumption. Finally, we conduct 3-D inversions of a field dataset acquired from Byneset, Norway. The results show very good agreement with the geological information.

19 pages, 13105 KB  
Article
Enhanced Offshore Wind Farm Geophysical Surveys: Shearlet-Sparse Regularization in Multi-Channel Predictive Deconvolution
by Yang Zhang, Deli Wang, Bin Hu, Junming Zhang, Xiangbo Gong and Yifei Chen
Remote Sens. 2024, 16(16), 2935; https://doi.org/10.3390/rs16162935 - 10 Aug 2024
Viewed by 1608
Abstract
This study introduces a novel multi-channel predictive deconvolution method enhanced by Shearlet-based sparse regularization, aimed at improving the accuracy and stability of subsurface seismic imaging, particularly in offshore wind farm site assessments. Traditional multi-channel predictive deconvolution techniques often struggle with noise interference, limiting their effectiveness. By integrating the Shearlet transform into the multi-channel predictive framework, our approach leverages its directional and multiscale properties to enhance sparsity and directionality in seismic data representation. Tests on both synthetic and field data demonstrate that our method not only provides more accurate seismic images but also shows significant resilience to noise compared to conventional methods. These findings suggest that the proposed technique can substantially improve geological feature identification and has great potential for enhancing the efficiency of seabed surveys in marine renewable energy development.
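Shearlet-sparse regularization schemes generally enforce sparsity on transform coefficients through the l1 proximal operator, i.e. soft thresholding, applied at each iteration. The paper's exact update rule is not reproduced here; the snippet below is only a generic NumPy sketch of that operator:

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Proximal operator of lam * ||x||_1: shrink every coefficient toward
    zero by lam, zeroing the small (noise-dominated) ones while keeping the
    sign and most of the magnitude of the strong (signal) ones."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

c = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
out = soft_threshold(c, 1.0)  # large entries shrink by 1, small ones vanish
```

In an iterative scheme, this step alternates with a data-fidelity update on the deconvolution operator, which is where the predictive framework of the paper would plug in.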

16 pages, 13327 KB  
Article
Fusion of Infrared and Visible Light Images Based on Improved Adaptive Dual-Channel Pulse Coupled Neural Network
by Bin Feng, Chengbo Ai and Haofei Zhang
Electronics 2024, 13(12), 2337; https://doi.org/10.3390/electronics13122337 - 14 Jun 2024
Cited by 3 | Viewed by 1418
Abstract
The pulse-coupled neural network (PCNN), due to its effectiveness in simulating the mammalian visual system to perceive and understand visual information, has been widely applied in the fields of image segmentation and image fusion. To address the issues of low contrast and the loss of detail information in infrared and visible light image fusion, this paper proposes a novel image fusion method based on an improved adaptive dual-channel PCNN model in the non-subsampled shearlet transform (NSST) domain. Firstly, NSST is used to decompose the infrared and visible light images into a series of high-pass sub-bands and a low-pass sub-band, respectively. Next, the PCNN models are stimulated using the weighted sum of the eight-neighborhood Laplacian of the high-pass sub-bands and the energy activity of the low-pass sub-band. The high-pass sub-bands are fused using local structural information as the basis for the linking strength for the PCNN, while the low-pass sub-band is fused using a linking strength based on multiscale morphological gradients. Finally, the fused high-pass and low-pass sub-bands are reconstructed to obtain the fused image. Comparative experiments demonstrate that, subjectively, this method effectively enhances the contrast of scenes and targets while preserving the detail information of the source images. Compared to the best mean values of the objective evaluation metrics of the compared methods, the proposed method shows improvements of 2.35%, 3.49%, and 11.60% in information entropy, mutual information, and standard deviation, respectively.
(This article belongs to the Special Issue Machine Learning Methods for Solving Optical Imaging Problems)
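The PCNN linking-strength rules above are involved; a common simpler baseline for the same NSST recipe (separate rules for high-pass and low-pass sub-bands) picks the larger-magnitude coefficient in detail bands and an energy-weighted average in the approximation band. The rules below are generic stand-ins, not the paper's PCNN:

```python
import numpy as np

def fuse_high(h1, h2):
    """High-pass rule: keep the coefficient with larger magnitude at each
    pixel -- a simple activity measure standing in for PCNN firing maps."""
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low(l1, l2):
    """Low-pass rule: local-energy-weighted average, so the more active
    source dominates the fused approximation band."""
    e1, e2 = l1 ** 2, l2 ** 2
    w = e1 / (e1 + e2 + 1e-12)
    return w * l1 + (1.0 - w) * l2

h1 = np.array([[0.5, -2.0], [0.1, 0.0]])
h2 = np.array([[-1.0, 1.0], [0.05, 3.0]])
fused = fuse_high(h1, h2)  # keeps the larger-magnitude coefficient per pixel
```

Methods like the one in the abstract replace these pixel-wise measures with neuron firing behaviour, which adds spatial context to the decision.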

22 pages, 18573 KB  
Article
A Multi-Scale Fusion Strategy for Side Scan Sonar Image Correction to Improve Low Contrast and Noise Interference
by Ping Zhou, Jifa Chen, Pu Tang, Jianjun Gan and Hongmei Zhang
Remote Sens. 2024, 16(10), 1752; https://doi.org/10.3390/rs16101752 - 15 May 2024
Cited by 5 | Viewed by 2258
Abstract
Side scan sonar images have great application prospects in underwater surveys, target detection, and engineering activities. However, the acquired sonar images exhibit low illumination, scattered noise, distorted outlines, and unclear edge textures due to the complicated undersea environment and intrinsic device flaws. Hence, this paper proposes a multi-scale fusion strategy for side scan sonar (SSS) image correction to improve the low contrast and noise interference. Initially, an SSS image was decomposed into low and high frequency sub-bands via the non-subsampled shearlet transform (NSST). Then, modified multi-scale retinex (MMSR) was employed to enhance the contrast of the low frequency sub-band. Next, sparse dictionary learning (SDL) was utilized to eliminate high frequency noise. Finally, the process of NSST reconstruction was completed by fusing the emerging low and high frequency sub-band images to generate a new sonar image. The experimental results demonstrate that the target features, underwater terrain, and edge contours could be clearly displayed in the image corrected by the multi-scale fusion strategy when compared to eight correction techniques: BPDHE, MSRCR, NPE, ALTM, LIME, FE, WT, and TVRLRA. Effective control was achieved over the speckle noise of the sonar image. Furthermore, the AG, STD, and E values reflected the detail and contrast of the corrected images processed by the proposed strategy. The PSNR value revealed that the proposed strategy outperformed the advanced TVRLRA technology in terms of filtering performance by at least 8.8%. It can provide sonar imagery that is appropriate for various circumstances.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing IV)
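The modified multi-scale retinex (MMSR) step builds on the classic multi-scale retinex, which averages log-domain differences between the image and Gaussian surrounds at several scales. A textbook sketch follows; the sigma values are illustrative, and this is the standard form rather than the paper's modified variant:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Classic MSR: log(image) minus log(Gaussian surround), averaged over
    scales. Flat regions map to ~0; local contrast is boosted."""
    img = img.astype(float)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img + eps) - np.log(gaussian_filter(img, s) + eps)
    return out / len(sigmas)

rng = np.random.default_rng(1)
img = rng.random((32, 32)) + 0.5       # strictly positive toy image
enhanced = multiscale_retinex(img, sigmas=(2, 4, 8))
```

In the abstract's pipeline, this enhancement is applied only to the NSST low-frequency sub-band, while the high-frequency noise is handled separately by dictionary learning.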

19 pages, 27088 KB  
Article
Research on Multi-Scale Fusion Method for Ancient Bronze Ware X-ray Images in NSST Domain
by Meng Wu, Lei Yang and Ruochang Chai
Appl. Sci. 2024, 14(10), 4166; https://doi.org/10.3390/app14104166 - 14 May 2024
Cited by 1 | Viewed by 1300
Abstract
X-ray imaging is a valuable non-destructive tool for examining bronze wares, but the complexity of the coverings of bronze wares and the limitations of single-energy imaging techniques often obscure critical details, such as lesions and ornamentation. Multiple images are therefore required to fully present the key information of a bronze artifact, which fragments that information and increases the difficulty of analysis and interpretation. Fusing X-ray images acquired at different energies into a single image with high-performance image fusion technology can effectively solve this problem; however, there is currently no specialized method for fusing images of bronze artifacts. Considering the special requirements for the restoration of bronze artifacts and existing fusion frameworks, this paper proposes a new method: a multi-scale morphological gradient and local-topology-coupled neural P systems approach in the Non-Subsampled Shearlet Transform domain. The proposed method is compared with eight high-performance fusion methods and validated using six evaluation metrics. The results demonstrate its significant theoretical and practical potential for advancing the analysis and preservation of cultural heritage artifacts.

19 pages, 2006 KB  
Article
Shearlet Transform Applied to a Prostate Cancer Radiomics Analysis on MR Images
by Rosario Corso, Alessandro Stefano, Giuseppe Salvaggio and Albert Comelli
Mathematics 2024, 12(9), 1296; https://doi.org/10.3390/math12091296 - 25 Apr 2024
Cited by 11 | Viewed by 1822
Abstract
For decades, wavelet theory has attracted interest in several fields dealing with signals. Nowadays, it is acknowledged that wavelets are not very suitable for capturing aspects of multidimensional data such as singularities, and this has led to the development of other mathematical tools. A recent application of wavelet theory is radiomics, an emerging field aiming to improve diagnostic, prognostic and predictive analysis of various cancer types through the analysis of features extracted from medical images. In this paper, for a radiomics study of prostate cancer with magnetic resonance (MR) images, we apply a similar but more sophisticated tool, namely the shearlet transform, which, in contrast to the wavelet transform, allows us to examine variations along more orientations. In particular, we conduct a parallel radiomics analysis based on the two transformations and highlight a better performance (evaluated in terms of statistical measures) when using the shearlet transform (in absolute value). The results achieved suggest taking the shearlet transform into consideration for radiomics studies in other contexts.

20 pages, 13665 KB  
Article
ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion
by Hanrui Chen, Lei Deng, Lianqing Zhu and Mingli Dong
Sensors 2023, 23(19), 8071; https://doi.org/10.3390/s23198071 - 25 Sep 2023
Cited by 3 | Viewed by 2050
Abstract
Infrared and visible image fusion (IVIF) aims to render fused images that maintain the merits of both modalities. To tackle the challenge in fusing cross-modality information and avoiding texture loss in IVIF, we propose a novel edge-consistent and correlation-driven fusion framework (ECFuse). This framework leverages our proposed edge-consistency fusion module to maintain rich and coherent edges and textures, simultaneously introducing a correlation-driven deep learning network to fuse the cross-modality global features and modality-specific local features. Firstly, the framework employs a multi-scale transformation (MST) to decompose the source images into base and detail layers. Then, the edge-consistent fusion module fuses detail layers while maintaining the coherence of edges through consistency verification. A correlation-driven fusion network is proposed to fuse the base layers containing both modalities’ main features in the transformation domain. Finally, the final fused spatial image is reconstructed by inverse MST. We conducted experiments to compare our ECFuse with both conventional and deep learning approaches on the TNO, LLVIP and M3FD datasets. The qualitative and quantitative evaluation results demonstrate the effectiveness of our framework. We also show that ECFuse can boost the performance in downstream infrared–visible object detection in a unified benchmark.
(This article belongs to the Section Sensing and Imaging)

23 pages, 11979 KB  
Article
Multi-Focus Image Fusion via PAPCNN and Fractal Dimension in NSST Domain
by Ming Lv, Zhenhong Jia, Liangliang Li and Hongbing Ma
Mathematics 2023, 11(18), 3803; https://doi.org/10.3390/math11183803 - 5 Sep 2023
Cited by 4 | Viewed by 1622
Abstract
Multi-focus image fusion is a popular technique for generating a full-focus image, where all objects in the scene are clear. To achieve a clearer and fully focused fusion effect, this paper develops a multi-focus image fusion method based on the parameter-adaptive pulse-coupled neural network (PAPCNN) and fractal dimension in the nonsubsampled shearlet transform domain. The PAPCNN-based fusion rule was used to merge the low-frequency sub-bands, and the fractal dimension-based fusion rule via the multi-scale morphological gradient was used to merge the high-frequency sub-bands. The inverse nonsubsampled shearlet transform was used to reconstruct the fused coefficients, and the final fused multi-focus image was generated. We conducted comprehensive evaluations of our algorithm using the public Lytro dataset. The proposed method was compared with state-of-the-art fusion algorithms, including traditional and deep-learning-based approaches. The quantitative and qualitative evaluations demonstrated that our method outperformed other fusion algorithms, as evidenced by metrics such as QAB/F, QE, QFMI, QG, QNCIE, QP, QMI, QNMI, QY, QAG, QPSNR, and QMSE. These results highlight the clear advantages of our proposed technique in multi-focus image fusion, providing a significant contribution to the field.
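A minimal decision-map baseline for multi-focus fusion computes a local focus measure (windowed Laplacian energy here) for each source and takes every pixel from the sharper one. This sketch only stands in for the PAPCNN and fractal-dimension rules described above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def focus_map(img, win=7):
    """Local energy of the Laplacian: large where the image is in focus."""
    return uniform_filter(laplace(img.astype(float)) ** 2, win)

def fuse_multifocus(a, b, win=7):
    """Choose-max decision map: each output pixel comes from whichever
    source image is sharper at that location."""
    mask = focus_map(a, win) >= focus_map(b, win)
    return np.where(mask, a, b)

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
blurred = gaussian_filter(sharp, 2.0)    # simulated out-of-focus copy
fused = fuse_multifocus(sharp, blurred)  # should track the sharp source
```

Transform-domain methods like the one above apply such rules per sub-band rather than per pixel, which avoids the blocking artifacts a hard spatial decision map can produce.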

18 pages, 9286 KB  
Article
Combination of Fast Finite Shear Wave Transform and Optimized Deep Convolutional Neural Network: A Better Method for Noise Reduction of Wetland Test Images
by Xiangdong Cui, Huajun Bai, Ying Zhao and Zhen Wang
Electronics 2023, 12(17), 3557; https://doi.org/10.3390/electronics12173557 - 23 Aug 2023
Cited by 1 | Viewed by 1257
Abstract
Wetland experimental images are often affected by factors such as waves, weather conditions, and lighting, resulting in severe noise in the images. In order to improve the quality and accuracy of wetland experimental images, this paper proposes a wetland experimental image denoising method based on the fast finite shearlet transform (FFST) and a deep convolutional neural network model. The FFST is used to decompose the wetland experimental images, which can capture the features of different frequencies and directions in the images. The network model has a deep network structure and powerful feature extraction capabilities. By training the model, it can learn the relevant features in the wetland experimental images, thereby achieving denoising effects. The experimental results show that, compared to traditional denoising methods, the proposed method in this paper can effectively remove noise from wetland experimental images while preserving the details and textures of the images. This is of great significance for improving the quality and accuracy of wetland experimental images.
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)

21 pages, 4502 KB  
Article
TDFusion: When Tensor Decomposition Meets Medical Image Fusion in the Nonsubsampled Shearlet Transform Domain
by Rui Zhang, Zhongyang Wang, Haoze Sun, Lizhen Deng and Hu Zhu
Sensors 2023, 23(14), 6616; https://doi.org/10.3390/s23146616 - 23 Jul 2023
Cited by 8 | Viewed by 2128
Abstract
In this paper, a unified optimization model for medical image fusion based on tensor decomposition and the non-subsampled shearlet transform (NSST) is proposed. The model is based on the NSST method and the tensor decomposition method to fuse the high-frequency (HF) and low-frequency (LF) parts of two source images to obtain a mixed-frequency fused image. In general, we integrate low-frequency and high-frequency information from the perspective of tensor decomposition (TD) fusion. Due to the structural differences between the high-frequency and low-frequency representations, potential information loss may occur in the fused images. To address this issue, we introduce a joint static and dynamic guidance (JSDG) technique to complement the HF/LF information. To improve the result of the fused images, we combine the alternating direction method of multipliers (ADMM) algorithm with the gradient descent method for parameter optimization. Finally, the fused images are reconstructed by applying the inverse NSST to the fused high-frequency and low-frequency bands. Extensive experiments confirm the superiority of our proposed TDFusion over other comparison methods.
(This article belongs to the Special Issue Computer-Aided Diagnosis Based on AI and Sensor Technology)

15 pages, 2065 KB  
Article
Multimodality Medical Image Fusion Using Clustered Dictionary Learning in Non-Subsampled Shearlet Transform
by Manoj Diwakar, Prabhishek Singh, Ravinder Singh, Dilip Sisodia, Vijendra Singh, Ankur Maurya, Seifedine Kadry and Lukas Sevcik
Diagnostics 2023, 13(8), 1395; https://doi.org/10.3390/diagnostics13081395 - 12 Apr 2023
Cited by 17 | Viewed by 2926
Abstract
Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study aims to incorporate a novel multimodality medical image fusion technique into the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. A novel approach is proposed for fusing low-frequency components using a modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique. In the NSST domain, directed contrast can be used to fuse high-frequency coefficients. Using the inverse NSST method, a multimodal medical image is obtained. Compared to state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics, the proposed method is shown to be approximately 10% better than existing methods in terms of standard deviation, mutual information, etc. Additionally, the proposed method produces excellent visual results regarding edge preservation, texture preservation, and information content.
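The modified sum-modified Laplacian (MSML) rule builds on the standard sum-modified Laplacian clarity measure: absolute second differences in x and y, accumulated over a local window. Below is a NumPy/SciPy sketch of the textbook form only; the paper's modification is not reproduced:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, win=3):
    """Sum-modified Laplacian: |2I - up - down| + |2I - left - right|,
    box-summed over a win x win neighborhood (edge padding at borders)."""
    img = img.astype(float)
    p = np.pad(img, 1, mode="edge")
    ml = (np.abs(2 * img - p[:-2, 1:-1] - p[2:, 1:-1])
          + np.abs(2 * img - p[1:-1, :-2] - p[1:-1, 2:]))
    return uniform_filter(ml, win) * win * win  # windowed sum

rng = np.random.default_rng(3)
sharp = rng.random((32, 32))
smooth = uniform_filter(sharp, 5)  # low-pass copy: clarity should drop
```

In a fusion rule, the coefficient (or dictionary atom) whose source has the higher SML response is the one kept, on the grounds that it carries more local detail.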

16 pages, 6327 KB  
Article
Research on Multi-Scale Feature Extraction and Working Condition Classification Algorithm of Lead-Zinc Ore Flotation Foam
by Xiaoping Jiang, Huilin Zhao, Junwei Liu, Suliang Ma and Mingzhen Hu
Appl. Sci. 2023, 13(6), 4028; https://doi.org/10.3390/app13064028 - 22 Mar 2023
Cited by 6 | Viewed by 2191
Abstract
To address the problems of difficult online monitoring, low recognition efficiency and the subjectivity of work condition identification in mineral flotation processes, a foam flotation performance state recognition method is developed. This method combines multi-dimensional CNN (convolutional neural network) features and improved LBP (local binary pattern) features. We have divided the foam flotation conditions into six categories. First, the multi-directional and multi-scale selectivity and anisotropy of the nonsubsampled shearlet transform (NSST) are used to decompose the flotation foam images at multiple frequency scales, and a multi-channel CNN network is designed to extract static features from the images at different frequencies. Then, the flotation video image sequences are rotated and dynamic features are extracted by the LBP-TOP (local binary patterns from three orthogonal planes), and the CNN-extracted static picture features are fused with the LBP dynamic video features. Finally, classification decisions are made by a PSO-RVFLNs (particle swarm optimization-random vector functional link networks) algorithm to accurately identify the foam flotation performance states. Experimental results show that the detection accuracy of the new method is significantly improved by 4.97% and 6.55% compared to the single CNN algorithm and the traditional LBP algorithm, respectively. The accuracy of flotation performance state classification was as high as 95.17%, and the method reduced manual intervention, thus improving production efficiency.
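LBP-TOP extends the basic 8-neighbour local binary pattern to three orthogonal planes of the video volume; the static 2-D code it builds on is easy to write down. A sketch follows (the clockwise bit ordering is an arbitrary convention chosen here):

```python
import numpy as np

def lbp8(img):
    """Basic LBP: each pixel gets an 8-bit code whose bits record which of
    its 8 neighbours are >= the centre value (edge padding at borders)."""
    img = img.astype(float)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = np.zeros(img.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        nb = p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        code |= (nb >= img).astype(int) << bit
    return code

flat = lbp8(np.ones((4, 4)))  # every neighbour ties the centre: all bits set
```

LBP-TOP computes this same code on the XY, XT and YT planes of a clip and concatenates the three histograms, which is how the dynamic texture of the foam video is captured.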

18 pages, 5006 KB  
Article
Classification of Mineral Foam Flotation Conditions Based on Multi-Modality Image Fusion
by Xiaoping Jiang, Huilin Zhao and Junwei Liu
Appl. Sci. 2023, 13(6), 3512; https://doi.org/10.3390/app13063512 - 9 Mar 2023
Cited by 6 | Viewed by 2085
Abstract
Accurate and rapid identification of mineral foam flotation states can increase mineral utilization and reduce the consumption of reagents. The traditional flotation process concentrates on extracting foam features from a single-modality foam image, and the accuracy is undesirable once problems such as insufficient image clarity or poor foam boundaries are encountered. In this work, a classification method based on multi-modality image fusion and CNN-PCA-SVM is proposed for work condition recognition of visible and infrared gray foam images. Specifically, the visible and infrared gray images are fused in the non-subsampled shearlet transform (NSST) domain using the parameter adaptive pulse coupled neural network (PAPCNN) method and the image quality detection method for high and low frequencies, respectively. The convolution neural network (CNN) is used as a trainable feature extractor to process the fused foam images, the principal component analysis (PCA) reduces feature data, and the support vector machine (SVM) is used as a recognizer to classify the foam flotation condition. After experiments, this model can fuse the foam images and recognize the flotation condition classification with high accuracy.

17 pages, 17089 KB  
Article
Sparse Representation-Based Multi-Focus Image Fusion Method via Local Energy in Shearlet Domain
by Liangliang Li, Ming Lv, Zhenhong Jia and Hongbing Ma
Sensors 2023, 23(6), 2888; https://doi.org/10.3390/s23062888 - 7 Mar 2023
Cited by 28 | Viewed by 2899
Abstract
Multi-focus image fusion plays an important role in the application of computer vision. In the process of image fusion, there may be blurring and information loss, so our goal is to obtain high-definition and information-rich fusion images. In this paper, a novel multi-focus image fusion method via local energy and sparse representation in the shearlet domain is proposed. The source images are decomposed into low- and high-frequency sub-bands according to the shearlet transform. The low-frequency sub-bands are fused by sparse representation, and the high-frequency sub-bands are fused by local energy. The inverse shearlet transform is used to reconstruct the fused image. The Lytro dataset with 20 pairs of images is used to verify the proposed method, and 8 state-of-the-art fusion methods and 8 metrics are used for comparison. According to the experimental results, our method achieves good performance for multi-focus image fusion.
(This article belongs to the Section Sensing and Imaging)