Article

Weighted Sparse Image Quality Restoration Algorithm for Small-Pixel High-Resolution Remote Sensing Data

Chenglong Yang, Chunyu Liu, Menghan Bai, Yingming Zhao, Yunhan Ma and Shuai Liu
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 2979; https://doi.org/10.3390/rs17172979
Submission received: 23 June 2025 / Revised: 18 August 2025 / Accepted: 26 August 2025 / Published: 27 August 2025

Abstract

The demand for high-spatial-resolution optical remote sensing applications is increasing, while conventional high-resolution optical payloads face limitations in widespread application due to their large size and high manufacturing costs. With the rapid development of image processing technology, we adopt a method combining small-pixel detector sampling with image deblurring algorithms to obtain high-spatial-resolution remote sensing images. In this work, we use Zernike polynomials to simulate diffraction-blurred small-pixel images under various aberration modulations, ensuring that the simulated data are grounded in sound physical principles. Furthermore, we propose a new weighted sparse model ℓwe that combines a Welsch-weighted ℓ1-norm with an ℓ0-norm constraint, and apply ℓwe regularization to both the gradient fidelity term and the image gradient term to strengthen the fidelity constraint and better preserve latent structure. Compared with other sparse models, our model produces results with fewer residual structures and stronger sparsity. Comprehensive evaluations on both simulated small-pixel remote sensing datasets and real-world remote sensing images demonstrate that the proposed weighted sparse image quality restoration algorithm achieves superior results with excellent robustness. Compared to other methods, the proposed approach improves PSNR by an average of 2.5% and SSIM by 2.2%, while reducing the Error Ratio (ER) by 20.7%. This provides an effective technical solution for image quality restoration of small-pixel remote sensing data.

1. Introduction

High-spatial-resolution remote sensing imaging technology plays a pivotal role in Earth observation systems, with its application scenarios encompassing major fields such as environmental monitoring, ocean exploration, and geographic mapping [1]. However, constrained by the physical limitations of the optical system’s diffraction limit and detector pixel size, traditional high-resolution imaging systems face engineering bottlenecks in the domain of spaceborne remote sensing: while solely relying on optical aperture expansion can enhance spatial resolution, this approach is accompanied by cascading issues including payload volume inflation, exponential increases in launch costs, and reduced system stability [2]. Against this backdrop, small-pixel imaging technology offers a more cost-effective and technically viable pathway for acquiring high-resolution remote sensing imagery, achieved through a breakthrough enhancement in sub-pixel-level spatial sampling rates [3].
The core mechanism of small-pixel imaging lies in compressing the pixel size below the radius of the Airy disk, thereby breaking through the traditional resolution limit by enhancing spatial sampling density. However, the nonlinear degradation effects induced by this technology cannot be overlooked: the Point Spread Function (PSF) of the optical system, operating at the small-pixel scale, will span multiple adjacent pixels, leading to significant spatial blurring effects during the imaging process. This blurring effect increases the difficulty of image interpretation and information extraction, consequently limiting the technology’s potential in applications such as land cover classification [4], target detection [5], and building extraction [6]. Therefore, conducting an in-depth investigation into the blur formation mechanism in small-pixel remote sensing images and seeking effective methods for image quality enhancement hold significant theoretical importance and application value. From a mathematical modeling perspective, the aforementioned imaging degradation [7] process can be modeled as
$$B = k \otimes L + n,$$
where $B$ represents the degraded blurred data, $L$ denotes the high-quality data to be restored, $k$ signifies the blur kernel (PSF), $n$ represents additive noise, and $\otimes$ denotes the convolution operator. It is particularly important to note that spaceborne remote sensing systems, subject to the coupled effects of factors such as alignment errors, mechanical vibration, and thermally induced deformation, experience changes in optical aberrations, causing the PSF to become time-varying. This makes $k$ in Equation (1) a typical time-varying unknown quantity, thereby transforming the small-pixel data restoration problem into a highly ill-posed [8] blind image deblurring task.
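As a concrete illustration, the following Python sketch simulates this degradation model; the Gaussian noise model, the noise level sigma, and the 'same'-size convolution are our assumptions, since Equation (1) leaves the noise distribution unspecified.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(L, k, sigma=0.01, rng=None):
    """Minimal sketch of Eq. (1): blur a sharp image L with the PSF k
    and add noise n. Gaussian noise is an illustrative assumption."""
    rng = np.random.default_rng() if rng is None else rng
    B = fftconvolve(L, k, mode="same")                 # k (x) L
    return B + sigma * rng.standard_normal(L.shape)    # + n
```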
To address the ill-posed nature of the blind deblurring problem, researchers have developed various regularization methods based on natural image priors. Levin et al. [9], through a Bayesian framework, revealed the inherent flaw in traditional Maximum a Posteriori (MAP) approaches, which are prone to converging to trivial solutions, and consequently proposed a marginalization optimization strategy. Krishnan et al. [10] advocated the use of ℓ1/ℓ2 norm regularization to constrain the sparsity of image gradients. Xu et al. [11] and Pan et al. [12] introduced generalized ℓ0 regularization to construct gradient sparsity models, achieving a notable balance between restoration efficiency and quality. However, gradient sparsity priors are prone, during iterative optimization, to driving the latent image towards the blurry input [13]. Even when combined with edge selection strategies [14,15,16,17], which can locally improve blur kernel estimation accuracy, these methods remain constrained by their reliance on strong edge feature priors, making them prone to inducing secondary issues such as noise amplification and artifact generation. Subsequent research shifted towards statistical feature constraints such as the dark channel prior [18], the local minimum intensity prior [19], and the patch-wise maximum gradient prior [20], yielding notable restoration results. Nevertheless, their effectiveness is limited by the expressiveness of the image’s statistical characteristics. Although recent deep learning techniques have demonstrated potential in blur kernel estimation using Convolutional Neural Networks (CNNs) [21,22] and end-to-end deblurring [23,24], they are still constrained by the complexity of the blur degradation process, the diversity of remote sensing scenarios, and the limitations of available training datasets. Consequently, existing deep learning models continue to face challenges [25,26] in terms of generalization capability, computational resource consumption, and the effectiveness of large-scale blurry image restoration.
Recently, Chen et al. [27] addressed the limitations of traditional noise modeling by adapting the noise modeling technique of [28]: they proposed modeling unknown noise with a weighted sum of a dense ℓ2-norm and a sparse norm ℓe (a combination of ℓ0 and ℓ1), and introduced a gradient sparsity enhancement mechanism via the ℓe norm, significantly improving the robustness of blur kernel estimation. Building upon this, Ge et al. [29] further extended the approach by incorporating a second-order gradient residual constraint and an improved gradient sparsity prior mechanism, effectively resolving optimization difficulties in large-scale blur kernel estimation. To address the blind restoration challenge in small-pixel remote sensing data, which arises from complex texture features and intricate noise interference during the blur degradation process, this study further constructs an even sparser model to constrain the gradient residual fitting term, aiming to estimate a more accurate blur kernel. Consequently, we propose an optimized blind deblurring model for small-pixel data based on a Welsch-weighted sparsity strategy. As illustrated in Figure 1, the residual maps generated by our weighted sparse model contain fewer image structures compared to those from other models, and the corresponding results exhibit fewer artifacts, as indicated by the red boxes.
Our main contributions are summarized as follows:
(1)
We propose a small-pixel sampling technique combined with a deblurring algorithm to acquire high-resolution remote sensing images, supported by Zernike polynomial-based simulations of aberration-modulated small-pixel data.
(2)
We introduce a sparser model (ℓwe), which incorporates a Welsch-weighted ℓ1-norm and an ℓ0-norm to constrain gradient fidelity and image gradients, along with an efficient solver under the MAP framework.
(3)
Quantitative metrics and visual quality assessments on both synthetic and real remote sensing data demonstrate that our method outperforms state-of-the-art algorithms in both restoration accuracy and structural preservation, effectively addressing diffraction degradation under aberration modulation.
The remainder of this paper is organized as follows. Section 2 introduces a weighted sparse model for small-pixel data and describes its optimization. Section 3 introduces the experimental comparison with other advanced algorithms. Section 4 provides further analysis and discussion on the effectiveness of our method. Section 5 concludes the paper.

2. Weighted Sparse Model and Optimization for Small-Pixel Data

2.1. Small-Pixel Data PSF Analysis

According to optical theory, a point source imaged through a diffraction-limited optical system forms an Airy disk. The Rayleigh criterion indicates that to achieve the system’s diffraction-limited resolution, a specific matching relationship must exist between the radius of the Airy disk and the detector pixel size—their scales should be approximately comparable. Figure 2a illustrates the sampling result of a diffraction-limited light spot by a normal optical system. In contrast, Figure 2b presents the sampling morphology of the identical diffraction spot when the pixel size is reduced by half.
However, practical optical systems are subject not only to diffraction modulation but also to prevalent aberrations such as defocus, astigmatism, spherical aberration, comatic aberration, and distortion. These aberrations arise from imperfections in optical components themselves, as well as factors including assembly and alignment errors, launch vibrations, and long-term on-orbit environmental variations. Based on Fourier optics principles, the PSF of an optical system can be characterized by the squared modulus of the Fourier transform of the Pupil Function:
$$\mathrm{PSF} = \left| \mathcal{F}\{ P(\rho, \theta) \} \right|^2,$$
where the pupil function $P(\rho, \theta)$ describes the complex amplitude transmittance at the pupil plane, encompassing both amplitude and phase information. Under ideal conditions (circular pupil, uniform illumination), its expression is
$$P(\rho, \theta) = A(\rho, \theta)\,\exp\!\left(i k W(\rho, \theta)\right),$$
where $A(\rho, \theta)$ denotes the amplitude transmittance (typically set to 1), $k$ is the wavenumber, and $W(\rho, \theta)$ represents the wavefront aberration. This aberration is typically characterized using Zernike polynomials:
$$W(\rho, \theta) = \sum_{j=1}^{K} \alpha_j Z_j(\rho, \theta).$$
In the equation, $\alpha_j$ represents the Zernike coefficients, and $Z_j$ denotes the corresponding orthogonal polynomials. Since the primary aberrations affecting imaging quality are typically dominated by low-order Zernike terms, this study simulates five representative PSF models based on small-pixel imaging Airy disks and Zernike polynomial aberrations, including defocus, spherical aberration, astigmatism, comatic aberration, and their combinations, to validate the accuracy of the PSF estimation method. The simulated PSF models are illustrated in Figure 3.
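For reference, a minimal Python sketch of this simulation pipeline is given below: it assembles a wavefront from a few representative low-order Zernike terms (Equation (4)), forms the pupil function (Equation (3)), and takes the squared modulus of its Fourier transform (Equation (2)). The grid size, the unnormalized polynomial forms, and coefficients expressed in waves are our assumptions.

```python
import numpy as np

def zernike_psf(coeffs, n=256):
    """Sketch of Eqs. (2)-(4): pupil P = A*exp(i*2*pi*W) with a wavefront W
    (in waves) built from a few low-order Zernike terms; PSF = |F{P}|^2."""
    y, x = np.mgrid[-1:1:1j*n, -1:1:1j*n]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    A = (rho <= 1.0).astype(float)                      # circular pupil, A = 1 inside
    Z = {                                               # representative Zernike terms
        "defocus":     2*rho**2 - 1,
        "astigmatism": rho**2 * np.cos(2*theta),
        "coma":        (3*rho**3 - 2*rho) * np.cos(theta),
        "spherical":   6*rho**4 - 6*rho**2 + 1,
    }
    W = sum(coeffs.get(name, 0.0) * Zj for name, Zj in Z.items())
    P = A * np.exp(2j * np.pi * W)                      # Eq. (3), phase in waves
    psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2    # Eq. (2)
    return psf / psf.sum()                              # unit-energy PSF

psf = zernike_psf({"defocus": 0.3, "coma": 0.1})        # coefficients alpha_j in waves
```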

2.2. Welsch-Weighted Sparse Model

To effectively constrain the latent image, sparse priors are commonly employed as regularization terms to suppress fine details that may adversely affect blur kernel estimation [30,31]. Building on the findings of [32], which demonstrate that the Welsch function effectively removes harmful details while preserving critical edges, we introduce its weighted form to achieve superior sparse constraint performance. The mathematical formulation is given by
$$\psi(x) = e^{-\left(|x|/a\right)^{b}},$$
where parameters $a$ and $b$ are tunable, with their weight functions shown in Figure 4a,b. We assign higher weights to fine details but lower weights to edges, which effectively suppresses fine details while preserving edge structures. The weight functions exhibit a key property: small inputs yield large weights, while large inputs produce small weights. By integrating this weighting mechanism into the sparse model, we construct a weighted sparse regularization term. Figure 4c compares the average gradient distributions of intermediate latent images under different sparse regularizers on the dataset of [33]. The proposed Welsch-weighted sparse regularization model demonstrates significantly stronger sparsity.
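A minimal sketch of this weighting function, using the $a_1 = 1.2$ and $b_1 = 6$ values reported in Section 3, could look as follows.

```python
import numpy as np

def welsch_weight(x, a=1.2, b=6):
    """Welsch-type weighting function of Eq. (5), as a sketch: weights near 1
    for small inputs (fine details, strongly penalized) and near 0 for large
    inputs (edges, preserved)."""
    return np.exp(-(np.abs(x) / a) ** b)
```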
We further employ Welsch-weighted sparse constraints to construct a regularization model. This regularization term consists of a weighted ℓ1-norm and a sparse ℓ0-norm. For a sparse signal $U$, its ℓwe-norm is defined as
$$\|U\|_{we} = \|\psi(\cdot)\,U\|_1 + \|U\|_0,$$
where $\psi(\cdot)$ represents the weighting function. Given a degraded signal $W$, the most basic ℓwe-norm minimization problem can be formulated as follows:
$$\min_{U}\; \|W - U\|_2^2 + \lambda\left(\|\psi(W)\,U\|_1 + \|U\|_0\right).$$
Here, λ is a non-negative penalty parameter. The model in (7) is essentially an element-wise minimization problem and can thus be decomposed into a series of identical subproblems for solution:
$$\min_{u}\; (w - u)^2 + \lambda\left(\psi(w)\,|u| + \|u\|_0\right),$$
where w and u denote the elements of signals W and U at the same position.
Theorem 1. 
The model (8) admits a closed-form solution:
$$u = \begin{cases} w + \lambda\psi(w)/2, & w < -\sqrt{\lambda} - \lambda\psi(w)/2,\\ w - \lambda\psi(w)/2, & w > \sqrt{\lambda} + \lambda\psi(w)/2,\\ 0, & \text{otherwise}. \end{cases}$$
Proof of Theorem 1. 
Let τ denote the energy function in Equation (8). We rewrite (8) as follows:
$$\tau(u) = \lambda\left(\psi(w)\,|u| + \|u\|_0\right) + (w - u)^2.$$
When $u = 0$, the energy is
$$\tau(0) = w^2.$$
When $u \neq 0$, the $\ell_0$ term contributes the constant $\lambda$, so minimizing $\tau$ reduces to
$$\arg\min_{u}\, \tau(u) = \arg\min_{u}\; \lambda\psi(w)\,|u| + (w - u)^2.$$
Since the weighting function $\psi(w)$ can be treated as a constant coefficient, Equation (12) essentially represents a one-dimensional shrinkage operator, expressed as follows:
$$\tau(u) = \begin{cases} \lambda - \lambda\psi(w)\,w - \left(\lambda\psi(w)/2\right)^2, & u = w + \lambda\psi(w)/2,\\ \lambda + \lambda\psi(w)\,w - \left(\lambda\psi(w)/2\right)^2, & u = w - \lambda\psi(w)/2. \end{cases}$$
When $u = w + \lambda\psi(w)/2$, the corresponding condition follows directly from requiring $\tau(u) < \tau(0)$; specifically, we obtain $w + \lambda\psi(w)/2 < -\sqrt{\lambda}$. The case $u = w - \lambda\psi(w)/2$ follows analogously. Thus, Equation (8) admits the closed-form solution in Equation (9). □
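The theorem translates directly into an element-wise shrinkage operator. The sketch below is our illustrative implementation of Equation (9); the default Welsch parameters reuse the $a_1$, $b_1$ values from Section 3.

```python
import numpy as np

def lwe_shrink(w, lam, a=1.2, b=6):
    """Element-wise solver for Eq. (8) following the closed form of
    Theorem 1 (a sketch). Entries below the psi-dependent threshold are
    zeroed; the rest are shrunk towards zero by lam * psi(w) / 2."""
    psi = np.exp(-(np.abs(w) / a) ** b)   # Welsch weight, Eq. (5)
    shift = lam * psi / 2.0
    thr = np.sqrt(lam) + shift            # thresholds of Eq. (9)
    u = np.zeros_like(w)
    u[w > thr] = w[w > thr] - shift[w > thr]
    u[w < -thr] = w[w < -thr] + shift[w < -thr]
    return u
```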

2.3. Integrated Model and Optimization

Building upon the aforementioned analysis, this study proposes a novel blind image deblurring model based on Welsch-weighted sparse constraints. Formulated within the traditional MAP framework, its objective function is defined as
$$\min_{L,k}\; \|B - L \otimes k\|_2^2 + \alpha\,\|\nabla B - \nabla L \otimes k\|_{we} + \beta\,\|\nabla L\|_{we} + \gamma\,\|k\|_2^2.$$
In this formulation, $\alpha$, $\beta$, and $\gamma$ are the weight parameters for each term; the implicit weighting parameters $a_1$ and $b_1$ belong to the second term, while $a_2$ and $b_2$ belong to the third. The first term (fidelity term) and second term (gradient fidelity term) model complex noise distributions during restoration, enforcing that the convolved output of the latent image $L$ and blur kernel $k$ approximates the observed blurred image $B$. The third term (gradient term) preserves strong gradients and sharp edges while suppressing fine details, biasing solutions toward clear images over blurred ones. The fourth term (blur kernel term) is a quadratic penalty on the kernel, which allows the kernel subproblem to be solved efficiently via the Fast Fourier Transform (FFT).
For the solution of Model (14), we employ an alternating minimization approach to iteratively estimate the latent sharp image:
$$\min_{L}\; \|B - L \otimes k\|_2^2 + \alpha\,\|\nabla B - \nabla L \otimes k\|_{we} + \beta\,\|\nabla L\|_{we},$$
and the blur kernel:
$$\min_{k}\; \|B - L \otimes k\|_2^2 + \alpha\,\|\nabla B - \nabla L \otimes k\|_{we} + \gamma\,\|k\|_2^2.$$

2.3.1. Latent Image Estimation

Due to the constraints imposed by the weighted sparse ℓwe-norm on both the gradient residual term and the gradient term, directly minimizing Equation (15) is highly challenging. To address this, we employ the half-quadratic splitting (HQS) strategy, introducing auxiliary variables $t$ and $g$ to replace $\nabla B - \nabla L \otimes k$ and $\nabla L$, respectively. Consequently, Equation (15) can be reformulated as
$$\min_{t,g,L}\; \|B - L \otimes k\|_2^2 + \alpha\,\|t\|_{we} + \beta\,\|g\|_{we} + \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \eta\,\|\nabla L - g\|_2^2,$$
where ς and η denote penalty parameters. This model can be solved by alternately updating variables t , g , and L through an iterative optimization process.
Given image L , the subproblem for solving t is formulated as follows:
$$\min_{t}\; \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \alpha\,\|t\|_{we}.$$
This constitutes a weighted ℓwe-norm minimization problem that admits a closed-form solution:
$$t = \begin{cases} R_E + \frac{\alpha}{2\varsigma}\,\psi(R_E), & R_E < -\sqrt{\alpha/\varsigma} - \frac{\alpha}{2\varsigma}\,\psi(R_E),\\ R_E - \frac{\alpha}{2\varsigma}\,\psi(R_E), & R_E > \sqrt{\alpha/\varsigma} + \frac{\alpha}{2\varsigma}\,\psi(R_E),\\ 0, & \text{otherwise}. \end{cases}$$
For notational convenience, we use $R_E$ to denote the gradient residual $\nabla B - \nabla L \otimes k$, while $\psi(\cdot)$ denotes the weighting function.
Given image L , the subproblem for solving g is formulated as follows:
$$\min_{g}\; \eta\,\|\nabla L - g\|_2^2 + \beta\,\|g\|_{we}.$$
The solution for g follows a similar form to Equation (19) and likewise admits a closed-form solution.
After solving the subproblems for t and g , the model for estimating the latent image L becomes
$$\min_{L}\; \|B - L \otimes k\|_2^2 + \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \eta\,\|\nabla L - g\|_2^2.$$
Through FFT, the solution to Equation (21) is given by
$$L = \mathcal{F}^{-1}\!\left( \frac{\overline{\mathcal{F}(k)}\,\mathcal{F}(B) + \varsigma\,\overline{\mathcal{F}(k)}\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(\nabla B - t) + \eta\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(g)}{\overline{\mathcal{F}(k)}\,\mathcal{F}(k) + \varsigma\,\overline{\mathcal{F}(k)}\,\mathcal{F}(k)\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(\nabla) + \eta\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(\nabla)} \right),$$
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ represent the forward and inverse Fourier transform operators, respectively, and $\overline{\mathcal{F}(\cdot)}$ denotes the complex conjugate.
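A minimal sketch of this FFT solve is given below; the $[1, -1]$ difference filters, circular boundary handling, and the psf2otf helper are standard assumptions rather than details fixed by the paper.

```python
import numpy as np

def psf2otf(f, shape):
    """Zero-pad a small filter f to `shape` and circularly center it at the
    origin before the FFT, so frequency-domain multiplication matches
    circular convolution with f."""
    pad = np.zeros(shape)
    pad[:f.shape[0], :f.shape[1]] = f
    pad = np.roll(pad, (-(f.shape[0] // 2), -(f.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def update_L(B, k, t, g, zeta, eta):
    """Sketch of the FFT solve of Eq. (22). t = (t_x, t_y) and g = (g_x, g_y)
    hold the auxiliary variables for the horizontal/vertical gradient
    directions."""
    FB, Fk = np.fft.fft2(B), psf2otf(k, B.shape)
    dx = np.array([[1.0, -1.0]])
    dy = np.array([[1.0], [-1.0]])
    num = np.conj(Fk) * FB
    den = np.abs(Fk) ** 2
    for d, td, gd in ((dx, t[0], g[0]), (dy, t[1], g[1])):
        Fd = psf2otf(d, B.shape)
        num += zeta * np.conj(Fk) * np.conj(Fd) * (Fd * FB - np.fft.fft2(td))
        num += eta * np.conj(Fd) * np.fft.fft2(gd)
        den += zeta * np.abs(Fk) ** 2 * np.abs(Fd) ** 2 + eta * np.abs(Fd) ** 2
    return np.real(np.fft.ifft2(num / den))
```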
Algorithm 1 summarizes the main steps for estimating the latent image L as follows:
Algorithm 1: Latent Image L Estimation
Input: Blurred image $B$, initial kernel $k^{(0)}$, parameters $\alpha$, $\beta$.
Initialize $L \leftarrow B$, $\varsigma \leftarrow \alpha$, $\eta \leftarrow \beta$.
repeat
        Update $t$ and $g$ using Equations (18)–(20).
        Update $L$ using Equation (22).
        $\varsigma \leftarrow 2.2\,\varsigma$, $\eta \leftarrow 2.2\,\eta$.
until $\eta > \eta_{max}$
Output: Final latent image L .

2.3.2. Blur Kernel Estimation

Building upon valuable insights from prior research [9], we optimize the blur kernel in the gradient domain by reformulating Equation (16) as follows:
$$\min_{k}\; \|\nabla B - \nabla L \otimes k\|_2^2 + \alpha\,\|\nabla B - \nabla L \otimes k\|_{we} + \gamma\,\|k\|_2^2.$$
Similarly, we introduce an auxiliary variable $t$ to replace $\nabla B - \nabla L \otimes k$ in the above equation, allowing the model to be rewritten as
$$\min_{t,k}\; \|\nabla B - \nabla L \otimes k\|_2^2 + \alpha\,\|t\|_{we} + \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \gamma\,\|k\|_2^2,$$
where ς is a non-negative penalty parameter. After solving the t subproblem in Equation (25), the blur kernel model is given by Equation (26):
$$\min_{t}\; \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \alpha\,\|t\|_{we},$$
$$\min_{k}\; \|\nabla B - \nabla L \otimes k\|_2^2 + \varsigma\,\|\nabla B - \nabla L \otimes k - t\|_2^2 + \gamma\,\|k\|_2^2.$$
Here, Equation (25) can be solved analogously to Equation (18), while Equation (26) admits an efficient FFT-based solution:
$$k = \mathcal{F}^{-1}\!\left( \frac{\overline{\mathcal{F}(\nabla L)}\,\mathcal{F}(\nabla B) + \varsigma\,\overline{\mathcal{F}(\nabla L)}\,\mathcal{F}(\nabla B - t)}{(1 + \varsigma)\,\overline{\mathcal{F}(\nabla L)}\,\mathcal{F}(\nabla L) + \gamma} \right).$$
Algorithm 2 summarizes the key steps for estimating blur kernel k:
Algorithm 2: Blur Kernel k Estimation
Input: Blurred image B , parameters α , γ .
Initialize k from the previous pyramid level.
while $i \leq max\_iter$ do
    Estimate $L$ using Algorithm 1.
    for $t = 1$ to 5 do
          Update $t$ and estimate $k$ using Equations (25) and (27).
          $\varsigma \leftarrow 2\,\varsigma$.
    end for
end while
Output: Estimated blur kernel $\hat{k}$.
Consistent with other advanced deblurring methods, the blur kernel estimation is performed on a multi-scale image pyramid using a coarse-to-fine strategy [12]. After the blur kernel k is obtained, its negative elements are set to zero and the kernel is normalized so that its elements sum to one, satisfying the definition of a blur kernel.
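The following sketch combines the FFT solve of Equation (27) with this projection step (clipping negative entries and renormalizing to unit sum); the crop to the kernel support and circular boundaries are our assumptions.

```python
import numpy as np

def update_k(grad_B, grad_L, t, zeta, gamma, ksize):
    """Sketch of the FFT solve of Eq. (27) plus the projection step.
    grad_B, grad_L, t are lists of arrays, one per gradient direction."""
    num = 0.0
    den = gamma
    for gB, gL, td in zip(grad_B, grad_L, t):
        FgL, FgB = np.fft.fft2(gL), np.fft.fft2(gB)
        num = num + np.conj(FgL) * FgB + zeta * np.conj(FgL) * np.fft.fft2(gB - td)
        den = den + (1.0 + zeta) * np.abs(FgL) ** 2
    k = np.real(np.fft.ifft2(num / den))[:ksize, :ksize]   # crop to kernel support
    k[k < 0] = 0                                           # non-negativity projection
    return k / max(k.sum(), 1e-8)                          # normalize: sum(k) = 1
```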

2.3.3. Final Image Restoration

Given that the core objective of this study is to restore high-quality images through accurate estimation of the blur kernel (PSF), various non-blind deconvolution methods can be employed to reconstruct the final sharp image once the blur kernel is determined. Therefore, unless otherwise specified, this paper uniformly adopts the non-blind sparse deconvolution algorithm [18] for final image restoration.

3. Experimental Results

The hyperparameters of our model were set as follows: $\alpha = \beta = 0.004$, $\gamma = 2$, $a_1 = 1.2$, $b_1 = 6$, $a_2 = 0.8$, $b_2 = 8$. The algorithm was implemented in the MATLAB 2023a environment and evaluated on a computer equipped with an AMD Ryzen 7 5800H CPU and 16 GB of RAM. To comprehensively assess the proposed method, experiments were conducted on both synthetically degraded remote sensing images from the AID dataset [34] and real-world degraded satellite images. The proposed method was compared with eight state-of-the-art algorithms: Krishnan et al. [10], Pan et al. [18], Wen et al. [19], Xu et al. [20], Pan et al. [16], Dong et al. [17], Chen et al. [27], and Ge et al. [29], with References [16,17] specifically incorporating saturated-pixel considerations. For experiments with synthetic images, full-reference image quality metrics were employed, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) [35], and Error Ratio (ER) [13]. In real remote sensing image processing experiments, no-reference evaluation metrics were used, including Image Entropy (E), Gray Mean Gradient (GMG), and Laplacian Sum (LS).
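For reproducibility, the full-reference metrics can be computed as sketched below; PSNR and SSIM use scikit-image, while the ER expression is our paraphrase of the error-ratio definition in [13].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(gt, restored, restored_gt_kernel):
    """Evaluation sketch for the synthetic experiments. ER is taken as the
    SSD of the restored image to the ground truth, divided by the SSD of the
    result deconvolved with the ground-truth kernel (our reading of [13]).
    All images are assumed to be floats in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, restored, data_range=1.0)
    ssim = structural_similarity(gt, restored, data_range=1.0)
    er = np.sum((restored - gt) ** 2) / np.sum((restored_gt_kernel - gt) ** 2)
    return psnr, ssim, er
```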

3.1. Small-Pixel Data Simulation

To construct the small-pixel remote sensing dataset, this study selected eight scene images covering different typical ground features (as shown in Figure 5: (a) airport, (b) bridge, (c) forest, (d) desert, (e) dense residential, (f) industrial zone, (g) storage tanks, (h) farmland). These were convolved with the five simulated Zernike aberration blur kernels mentioned earlier, ultimately generating the corresponding simulated dataset.

3.2. Simulated Remote Sensing Image Processing Experiment

First, we evaluated the proposed method on simulated small-pixel remote sensing data (comprising eight images with five blur kernels). To ensure a fair comparison, all methods were configured with a blur kernel size of 29 × 29 and employed the same non-blind deconvolution algorithm to obtain the final sharpened images. As shown in Table 1, the proposed method achieved the lowest average ER, the highest average PSNR, and the highest average SSIM. Additionally, the success rate of the proposed method in maintaining an ER below 5 reached 97.5%.
Figure 6, Figure 7 and Figure 8 present visual comparisons of selected processing results. By examining the magnified red-box regions—such as airplanes, parking lots, cars, ships, and buildings—it can be observed that, compared to other methods, the proposed approach produces restored images with sharper edges, significantly reduced ringing artifacts, and better recovered fine details.
The superior performance demonstrated across various remote sensing scenarios in both subjective and objective evaluations visually confirms the proposed method’s advantages in PSF estimation accuracy and the resulting enhanced restoration effects, establishing its practical value for remote sensing image quality enhancement.
Next, we analyzed the PSNR and SSIM results of the recovered images under different scenarios, as presented in Table 2, which indicate that the algorithm’s performance is correlated with the features and textures in various land cover types. According to prior experience, scenes with sufficient features and relatively simple textures generally achieve higher reconstruction accuracy. From the quantitative results, the airport, bridge, and farmland scenes exhibit the best restoration performance, as their image structures contain more distinctive features with fewer fine textures. The dense residential and industrial scenes also achieve satisfactory results, though their textures are slightly more complex. The forest scene, characterized by homogeneous backgrounds and low-texture gray values, demonstrates reasonably good reconstruction metrics. In contrast, the desert and storage tank scenes, which contain intricate textures and highly detailed structures, show relatively weaker algorithm performance with lower quantitative metrics.

3.3. Real Remote Sensing Image Processing Experiments

To further validate the proposed method, we conducted evaluations on real-world degraded satellite remote sensing images. For a fair comparison, all methods were constrained to a fixed blur kernel size of 29 × 29. As shown in Table 3 (where the GMG and LS values are scaled by 10⁻³), the proposed method achieved nearly the highest scores in E, GMG, and LS, outperforming all other methods except Krishnan et al. [10] and Pan et al. [18] on individual metrics.
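Since the paper does not spell out the formulas for E, GMG, and LS, the sketch below uses common definitions of these no-reference metrics (histogram entropy, mean gradient magnitude, and mean absolute Laplacian); the exact variants and scalings used in Table 3 may differ.

```python
import numpy as np
from scipy.ndimage import laplace

def no_reference_scores(img):
    """Sketch of no-reference sharpness metrics under common definitions:
    E = Shannon entropy of the gray-level histogram, GMG = mean gradient
    magnitude, LS = mean absolute Laplacian response. img is assumed to be
    a float image in [0, 1]."""
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))          # E
    gy, gx = np.gradient(img)
    gmg = np.mean(np.sqrt(gx**2 + gy**2))      # GMG
    ls = np.mean(np.abs(laplace(img)))         # LS
    return entropy, gmg, ls
```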
Figure 9, Figure 10, Figure 11 and Figure 12 present visual comparisons of the restoration results. Observing the red-box regions in Figure 9 and Figure 10, it is evident that our method effectively restored brighter building and vehicle areas, while other comparative methods generally produced varying degrees of oscillatory artifacts with inferior subjective visual quality. In the red-box region of Figure 11, Figure 11b,c exhibit significant distortion, while Figure 11d–g demonstrate some noise-like bright-dark patterns. Although Figure 11h–j show comparable subjective visual quality, our method yields superior objective evaluation metrics. Notably, for scenes with prominent features but weak background textures (e.g., Figure 12), all methods demonstrate satisfactory subjective visual performance. Nevertheless, our method still maintains advantages in objective metrics (E, GMG, LS).
This comprehensive evaluation confirms that our approach not only improves objective image quality metrics but also delivers more natural and visually pleasing restoration results for real-world remote sensing applications.

4. Discussion

In this section, we first conduct ablation studies for comparative analysis, then validate the effectiveness of the proposed weighted sparse constraint model, and analyze the algorithm’s hyperparameter sensitivity, convergence, and computational complexity. To ensure comparability and reliability, the ablation experiments are performed on simulated small-pixel remote sensing data, while other experiments are conducted on the Levin dataset [13], which includes four images and their corresponding eight different blur kernels. The comparison algorithms remain the same as those used in the previous section.
For quantitative performance assessment, we employed four key metrics: the ER for overall restoration accuracy, PSNR for noise characteristics evaluation, SSIM for perceptual quality measurement, and Kernel Similarity [36] specifically designed to assess blur kernel estimation precision.

4.1. Ablation Study

We first conduct an ablation study to analyze the effectiveness of the ℓwe residual term and ℓwe gradient term. Specifically, four cases are compared: ℓ0 + ℓ0, ℓ0 + ℓwe, ℓwe + ℓ0, and our proposed ℓwe + ℓwe model. The performance evaluation on small-pixel remote sensing data, as presented in Table 4, demonstrates the superiority of the ℓwe + ℓwe model.

4.2. Model Effectiveness Analysis

Although the proposed image restoration algorithm theoretically demonstrates effective deblurring capability, its practical performance requires quantitative evaluation. As shown in Figure 13a, comparing the proposed method with other methods clearly demonstrates its superior performance in terms of cumulative error rate. Further validation is provided in Figure 14a,b, where our method achieves the highest average PSNR and average SSIM values among all compared algorithms, confirming its advantages in structural fidelity.

4.3. Convergence Analysis and Computational Efficiency

To validate the overall convergence characteristics of our algorithm, we systematically monitored the evolution of both the average energy of the objective function (14) and the average kernel similarity across iterations at the optimal scale of the Levin dataset. As illustrated in Figure 13b,c, the proposed method demonstrates excellent stability and convergence properties, with the average energy of the objective function stabilizing after approximately 19 iterations, while the average kernel similarity achieves convergence within about 35 iterations.
Additionally, the average runtime of different algorithms was tested across varying image sizes to evaluate computational efficiency. As shown in Table 5, the runtime of the proposed method is on the same order of magnitude as current state-of-the-art efficient algorithms (e.g., Xu et al. [20], Chen et al. [27]), while significantly outperforming more time-consuming comparative methods (e.g., Pan et al. [16], Dong et al. [17]). Combined with the aforementioned superior restoration performance analysis, these results demonstrate that the proposed method maintains high computational efficiency while achieving highly competitive processing outcomes.

4.4. Key Parameters Analysis

The proposed objective function (14) incorporates seven primary hyperparameters: $\alpha$, $\beta$, $\gamma$, $a_1$, $a_2$, $b_1$, and $b_2$. To investigate their individual impacts on algorithm performance, we conducted controlled tests using the single-variable method, with kernel similarity between estimated and ground-truth blur kernels as the evaluation metric.
Figure 15 presents the influence of each hyperparameter’s variation within reasonable ranges on kernel similarity.
The analysis results demonstrate that within the tested hyperparameter variation range, the proposed method exhibits relatively small fluctuations in estimated blur kernel quality, indicating strong stability.

5. Conclusions

This study employs Zernike polynomials to simulate small-pixel diffraction data modulated by various aberrations, and proposes a weighted sparse image quality restoration algorithm for small-pixel remote sensing data within the MAP framework to obtain high-quality, high-resolution remote sensing data. The core of this algorithm lies in constructing a weighted sparse model utilizing the nonlinear attenuation characteristics of the Welsch weighting function. This model effectively suppresses complex noise during the restoration process by constraining the gradient residual fitting term, while simultaneously enhancing restoration accuracy by penalizing non-significant details through image gradient constraints. The algorithm adopts a projection-based iterative strategy, combining the half-quadratic splitting method and FFT for efficient solution, thereby obtaining the estimated PSF and restoring remote sensing image quality.
Experiments on simulated small-pixel remote sensing datasets demonstrate that the proposed algorithm outperforms comparative methods in both quantitative evaluation metrics and subjective visual quality, verifying its effectiveness. Test results on real remote sensing imagery further confirm that the algorithm offers clear advantages in both subjective visual quality and objective evaluation indicators. Additionally, the algorithm exhibits low sensitivity to hyperparameters, good robustness, stable convergence, and high computational efficiency. We believe this algorithm provides a useful reference for the implementation of small-pixel imaging technology.
Furthermore, it is imperative to conduct in-depth investigations into the signal-to-noise ratio (SNR) characteristics of small-pixel imaging systems, with particular emphasis on the algorithm’s restoration capability under complex noise conditions. Future research should prioritize the integration of deep learning techniques into the existing framework. Such integration would effectively harness the complementary strengths of both paradigms, enabling enhanced generalization capability and improved computational efficiency while maintaining superior restoration performance. Ultimately, these advancements are expected to yield more robust and resource-efficient solutions for small-pixel remote sensing image restoration.

Author Contributions

Conceptualization, C.Y., C.L. and M.B.; methodology, C.Y. and C.L.; writing—original draft preparation, C.Y. and C.L.; writing—review and editing, C.Y., C.L., M.B., Y.Z., Y.M. and S.L.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Major Projects of the Ministry of Science and Technology under Grant 2023YFB3906302.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Li, D.; Wang, M.; Jiang, J. China’s high-resolution optical remote sensing satellites and their mapping applications. Geo-Spat. Inf. Sci. 2021, 24, 85–94.
2. Metwally, M.; Bazan, T.M.; Eltohamy, F. Design of very high-resolution satellite telescopes part I: Optical system design. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 1202–1208.
3. Ge, J.J.; Wu, Y.C.; Chen, H. Research progress on high-resolution imaging system for optical remote sensing in aerospace. Chin. Opt. 2023, 16, 258–282.
4. Xu, Z.; Su, C.; Zhang, X. A semantic segmentation method with category boundary for Land Use and Land Cover (LULC) mapping of Very-High Resolution (VHR) remote sensing image. Int. J. Remote Sens. 2021, 42, 3146–3165.
5. Han, W.; Chen, J.; Wang, L.; Feng, R.; Li, F.; Wu, L.; Tian, T.; Yan, J. Methods for small, weak object detection in optical high-resolution remote sensing images: A survey of advances and challenges. IEEE Geosci. Remote Sens. Mag. 2021, 9, 8–34.
6. Wang, Y.; Gu, L.; Li, X.; Ren, R. Building extraction in multitemporal high-resolution remote sensing imagery using a multifeature LSTM network. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1645–1649.
7. Chan, T.F.; Wong, C.K. Total variation blind deconvolution. IEEE Trans. Image Process. 1998, 7, 370–375.
8. Yang, L.; Ren, J. Remote sensing image restoration using estimated point spread function. In Proceedings of the 2010 International Conference on Information, Networking and Automation (ICINA), Kunming, China, 18–19 October 2010; Volume 1, pp. V1-48–V1-52.
9. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664.
10. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
11. Xu, L.; Zheng, S.; Jia, J. Unnatural l0 sparse representation for natural image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1107–1114.
12. Pan, J.; Hu, Z.; Su, Z.; Yang, M.-H. L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 342–355.
13. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
14. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
15. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 157–170.
16. Pan, J.; Lin, Z.; Su, Z.; Yang, M.-H. Robust kernel estimation with outliers handling for image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2800–2808.
17. Dong, J.; Pan, J.; Su, Z.; Yang, M.-H. Blind image deblurring with outlier handling. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2478–2486.
18. Pan, J.; Sun, D.; Pfister, H.; Yang, M.-H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
19. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.-K. A simple local minimal intensity prior and an improved algorithm for blind image deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2923–2937.
20. Xu, Z.; Chen, H.; Li, Z. Fast blind deconvolution using a deeper sparse patch-wise maximum gradient prior. Signal Process. Image Commun. 2021, 90, 116050.
21. Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8174–8182.
22. Cho, S.; Ji, S.; Hong, J.; Jung, S.; Ko, S. Rethinking coarse-to-fine approach in single image deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 4621–4630.
23. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. Deblurgan: Blind motion deblurring using conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8183–8192.
24. Zhang, D.; Tang, N.; Qu, Y. Joint motion deblurring and super-resolution for single image using diffusion model and gan. IEEE Signal Process. Lett. 2024, 31, 736–740.
25. Jiang, W.; Sun, Y.; Lei, L.; Kuang, G.; Ji, K. Change detection of multisource remote sensing images: A review. Int. J. Digit. Earth 2024, 17, 2398051.
26. Archana, R.; Jeevaraj, P.S.E. Deep learning models for digital image processing: A review. Artif. Intell. Rev. 2024, 57, 11.
27. Chen, L.; Fang, F.; Lei, S.; Li, F.; Zhang, G. Enhanced sparse model for blind deblurring. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Cham, Switzerland, 2020; pp. 631–646.
28. Gong, Z.; Shen, Z.; Toh, K.C. Image restoration with mixed or unknown noises. Multiscale Model. Simul. 2014, 12, 458–487.
29. Ge, X.; Liu, J.; Hu, D.; Tan, J. An extended sparse model for blind image deblurring. Signal Image Video Process. 2024, 18, 1863–1877.
30. Liu, J.; Yan, M.; Zeng, T. Surface-Aware Blind Image Deblurring. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1041–1055.
31. Li, L.; Pan, J.; Lai, W.S.; Gao, C.; Sang, N.; Yang, M.-H. Blind Image Deblurring via Deep Discriminative Priors. Int. J. Comput. Vis. 2019, 127, 1025–1043.
32. Xu, Z.; Lai, J.; Zhou, J.; Chen, H.; Huang, H.; Li, Z. Image deblurring using a robust loss function. Circuits Syst. Signal Process. 2022, 41, 1704–1734.
33. Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In Proceedings of the Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 27–40.
34. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
36. Hu, Z.; Yang, M.H. Learning Good Regions to Deblur Images. Int. J. Comput. Vis. 2015, 115, 345–362.
Figure 1. Comparative analysis of noise modeling strategies. The first and second rows display the processing results from different models along with their corresponding residual maps. (a) Blurry input; (b) Chen et al. [27]; (c) Ge et al. [29]; (d) Ours.
Figure 2. Comparative visualization of Airy disk patterns: (a) Diffraction spot under normal sampling; (b) diffraction spot under small-pixel sampling.
Figure 3. Five point spread functions simulated using Zernike polynomials.
Figure 4. Comparison of Welsch weighting functions and their sparse regularization performance: (a) Comparison under different a values; (b) comparison under different b values; (c) average gradient distribution of intermediate latent images obtained from different sparse regularizers.
Figure 5. Simulated small-pixel data samples: (a) airport; (b) bridge; (c) forest; (d) desert; (e) dense residential; (f) industrial; (g) storage tanks; (h) farmland.
Figure 6. Visual comparison of different methods on the remote sensing dataset [10,16,17,18,19,20,27,29].
Figure 7. Visual comparison of different methods on the remote sensing dataset [10,16,17,18,19,20,27,29].
Figure 8. Visual comparison of different methods on the remote sensing dataset [10,16,17,18,19,20,27,29].
Figure 9. Visual comparison of different methods on real remote sensing data [10,16,17,18,19,20,27,29].
Figure 10. Visual comparison of different methods on real remote sensing data [10,16,17,18,19,20,27,29].
Figure 11. Visual comparison of different methods on real remote sensing data [10,16,17,18,19,20,27,29].
Figure 12. Visual comparison of different methods on real remote sensing data [10,16,17,18,19,20,27,29].
Figure 13. Quantitative evaluation on Levin dataset and convergence analysis [10,16,17,18,19,20,27,29].
Figure 14. Quantitative evaluation results on Levin dataset [10,16,17,18,19,20,27,29].
Figure 15. Hyperparameter sensitivity analysis of the proposed algorithm.
Table 1. Processing results of different methods on simulated images from the AID dataset.

Method                 PSNR     SSIM     ER       Success (ER < 5)
Krishnan et al. [10]   21.59    0.3112   608.69   0.00%
Pan et al. [18]        25.31    0.5765   53.92    7.50%
Wen et al. [19]        30.58    0.8961   3.49     92.5%
Xu et al. [20]         30.25    0.8990   3.47     92.5%
Pan et al. [16]        30.66    0.8984   3.71     82.5%
Dong et al. [17]       29.96    0.8734   4.87     70.0%
Chen et al. [27]       30.48    0.8986   3.13     90.0%
Ge et al. [29]         30.27    0.8962   3.25     90.0%
Ours                   31.45    0.9178   2.48     97.5%
Table 2. The restoration results across different scenarios.

Metric   Airport   Bridge   Forest   Desert   Dense Residential   Industrial   Storage Tanks   Farmland
PSNR     34.69     35.25    30.78    27.78    32.94               32.64        26.95           37.85
SSIM     0.9571    0.9592   0.9221   0.8617   0.9422              0.9403       0.8682          0.9596
Table 3. Processing results of different methods on real degraded remote sensing images (GMG and LS values scaled by 10⁻³).

                  (b)     (c)     (d)     (e)     (f)     (g)     (h)     (i)     (j)
Figure 9    E     14.08   13.78   13.79   13.78   13.71   13.74   13.78   13.78   13.81
            GMG   14.50   9.630   9.820   9.183   7.778   7.976   9.277   9.319   11.23
            LS    6.154   4.264   4.285   3.986   3.286   3.320   4.040   4.055   6.020
Figure 10   E     14.24   14.28   14.26   14.27   14.25   14.23   14.28   14.29   14.29
            GMG   17.43   33.18   19.17   19.67   18.71   16.37   20.27   20.95   21.37
            LS    12.17   22.79   13.97   14.24   13.59   11.81   14.40   14.66   14.64
Figure 11   E     15.03   14.75   14.84   14.85   14.83   14.82   14.86   14.86   14.84
            GMG   42.06   57.72   33.66   33.69   31.57   29.89   33.81   33.40   34.03
            LS    29.68   42.47   24.32   24.03   23.79   22.43   24.61   24.47   24.64
Figure 12   E     14.39   14.21   14.22   14.22   14.21   14.21   14.23   14.23   14.24
            GMG   9.124   6.408   6.566   6.589   6.362   6.267   6.70    6.713   7.004
            LS    2.951   2.320   2.400   2.415   2.339   2.160   2.484   2.490   2.599
Table 4. Ablation study on different residual and gradient models.

Metric   ℓ0 + ℓ0   ℓ0 + ℓwe   ℓwe + ℓ0   ℓwe + ℓwe
PSNR     30.08     30.28      30.45      31.45
SSIM     0.8934    0.8969     0.8968     0.9178
Table 5. Comparison of runtime among different methods (unit: s).

Method                 255 × 255   600 × 600   1000 × 1000
Krishnan et al. [10]   28.69       133.84      319.05
Pan et al. [18]        108.55      550.12      1495.83
Wen et al. [19]        9.98        24.74       61.25
Xu et al. [20]         4.41        21.27       60.10
Pan et al. [16]        99.81       338.65      865.89
Dong et al. [17]       125.85      352.44      894.71
Chen et al. [27]       4.64        25.87       68.88
Ge et al. [29]         8.87        44.80       131.27
Ours                   4.68        26.02       69.79