Article

A Second-Order Continuous-Time Dynamical System for Solving Sparse Image Restoration Problems

School of Management Science, Qufu Normal University, Rizhao 276800, China
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(15), 2360; https://doi.org/10.3390/math12152360
Submission received: 12 June 2024 / Revised: 18 July 2024 / Accepted: 18 July 2024 / Published: 28 July 2024

Abstract

Images captured digitally or transmitted over networks are often degraded by noise. Existing image restoration methods can be ineffective against intricate noise patterns, or can be slow or imprecise. This paper fills this gap by presenting a new second-order continuous-time dynamical system for image denoising in image restoration. The approach poses the problem as a convex quadratic program that can therefore be solved to optimality. The existence and uniqueness of a global solution are demonstrated theoretically, and conditions for the global strong convergence of the system's trajectory are provided. The method is shown to be effective in a number of image restoration experiments; its performance exceeds that of other known algorithms, with an average SNR of 34.78 dB and an average Structural Similarity Index Measure (SSIM) of 0.959 for the reconstructed images. These improvements demonstrate the effectiveness of the second-order dynamical system approach in practical image restoration applications.
MSC:
90C20; 94A08; 94A12; 68U10

1. Introduction

Digital images are widely used in many areas, such as medical imaging and remote sensing. However, noise introduced during acquisition or transmission can greatly reduce their quality and complicate subsequent analysis tasks. Commonly used restoration methods rely on difficult optimization procedures or can fail on certain types of noise. Noise reduction is therefore an active research topic in image processing and pattern recognition [1]. This technique has been widely applied in face recognition [2,3], human tracking [4], and underwater acoustic imaging [5].
In general, the noise reduction problem can be described by the following equation [6]:
$\min \|x\|_0 \quad \text{s.t.} \quad Ax = b, \qquad (1)$
where $A$ denotes a full-row-rank matrix, $b$ denotes the observed image, $x$ denotes the blurred image to be recovered, and $\|\cdot\|_0$ denotes the $\ell_0$ norm. Due to the NP-hardness of this problem [7], it is usually relaxed to the following:
$\min \|x\|_1 \quad \text{s.t.} \quad Ax = b, \qquad (2)$
where $\|\cdot\|_1$ denotes the $\ell_1$ norm. According to [8,9], (2) and (1) have the same solution.
In practice, the observed image $b$ is often polluted by noise, in which case Equation (2) can usually be relaxed to
$\min \|x\|_1 \quad \text{s.t.} \quad \|Ax - b\|_2 < \epsilon, \qquad (3)$
where $\|\cdot\|_2$ denotes the $\ell_2$ norm, and $\epsilon > 0$ is the noise tolerance. Adopting the Tikhonov regularization technique, i.e., associating the constraint with a positive multiplier $\tau$, we obtain the Lagrangian form [10]:
$\min \frac{1}{2}\|b - Ax\|_2^2 + \tau\|x\|_1, \qquad (4)$
where $\tau > 0$. Problem (4) is convex but not differentiable, so it has no closed-form solution, and solving it efficiently is a computational challenge [11].
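As a point of reference for problem (4), the classical iterative shrinkage-thresholding algorithm (ISTA) minimizes this objective by alternating a gradient step on the smooth term with elementwise soft-thresholding. The sketch below is an illustrative baseline, not the dynamical system proposed in this paper; the function names and test problem are our own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, tau, iters=2000):
    """Minimize 0.5*||b - A x||_2^2 + tau*||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)             # gradient of the smooth data term
        x = soft_threshold(x - step * grad, step * tau)
    return x

# Tiny demo: recover a 3-sparse vector from 40 noisy random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, b, tau=0.1)
```

With mild noise and sufficient measurements, the three largest entries of `x_hat` recover the true support.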
At present, the noise reduction problem is receiving growing attention, especially in image restoration and computer vision involving big data. Many optimization methods have been developed in recent decades for different noise reduction problems [12]. Greedy pursuit methods have become mainstream by iteratively selecting coefficients to substantially improve the approximation [13]. When there are few non-zero elements, these methods run efficiently; however, when the sparsity assumption is not satisfied, they become inefficient for large-scale problems.
Other noise reduction problems are solved via convex relaxation [14]. Existing convex relaxation algorithms include the proximal point algorithm [15], the fixed-point algorithm [16], thresholding methods [17], and the gradient projection algorithm [1]. Because in many applications the dimension of $b$ is much smaller than that of $x$ [18], some (Lagrangian) dual-type methods have been developed [19,20], which are more efficient because the scale of the problem is significantly reduced.
Artificial neural networks are characterized by distributed information storage and massively parallel processing, so they can solve optimization problems quickly and efficiently. Using neural networks to solve Equation (4) is therefore a natural approach. In recent years, neurodynamics-based approaches such as discrete-time recurrent neural networks [21], discrete-time projection neural networks [22], and projection neural networks [23,24,25] have been developed for (4).
For accelerating algorithms, many studies have proposed second-order dynamical systems for monotone inclusion problems [26,27,28,29,30]. Bot et al. [30] established the strong convergence of a trajectory under mild conditions. They also proved that second-order dynamical systems perform better than the first-order method in numerical examples.
Zheng Ding and colleagues [31] proposed a novel approach to image restoration. They leveraged the inherent generative power of denoising diffusion models, adapting a pretrained model for restoration. The key insight lies in constraining the generative space by fine tuning with anchor images that capture the input image’s characteristics. Yang et al. [32] introduced the Dynamic Kernel Prior (DKP) model. Unlike traditional supervised approaches, DKP learns adaptive kernel priors for real-time kernel estimation, enhancing high-resolution image restoration. This unsupervised and pretraining-free method demonstrates superior performance, bridging the gap in blind super-resolution.
In this article, we demonstrate the convergence of the trajectory by the second-order dynamic system (6) under some mild conditions. Furthermore, numerical image restoration experiments are reported to show the superiority of the proposed method.
Compared with techniques based on iterative optimization procedures, our method has several advantages. First, the dynamical system framework converges faster to the optimal solution and is therefore suitable for real-time applications. Second, formulating the problem as a convex quadratic program enables efficient optimization and rules out convergence to spurious local optima. Moreover, the proposed method is theoretically well founded: we establish the existence and uniqueness of the global solution and mild conditions for the global strong convergence of the system trajectory. These theoretical guarantees provide assurance of the method's performance and convergence.
The rest of this article is organized as follows. In Section 2, we present related definitions and lemmas. In Section 3, we propose a second-order continuous-time dynamical system for solving the quadratic program problem. We also prove the global strong convergence of the trajectory. In Section 4, experiments show that the proposed algorithm achieves superior performance over other algorithms for image restoration. In Section 5, we present our conclusions.

2. Preliminaries

In this section, to pave the way for subsequent proofs, we provide some relevant definitions and lemmas.
Definition 1.
If $\phi: \mathbb{R}^n \to \mathbb{R}^n$ and, for all $\theta, \mu \in \mathbb{R}^n$ and some $\beta > 0$,
$\beta\|\phi(\theta) - \phi(\mu)\|^2 \le \langle \theta - \mu, \phi(\theta) - \phi(\mu)\rangle,$
then ϕ is β-cocoercive.
Definition 2.
For an arbitrary set-valued operator $B: \mathbb{R}^n \rightrightarrows \mathbb{R}^n$, the graph of $B$ is
$\mathrm{Gr}\,B = \{(x, u) \in \mathbb{R}^n \times \mathbb{R}^n : u \in Bx\}.$
Further, the set of zeros of $B$ is
$\mathrm{Zer}\,B = \{x \in \mathbb{R}^n : 0 \in Bx\}.$
For subsequent proofs, the following lemma is needed.
Lemma 1
([30]). Let $1 \le c < +\infty$ and $1 \le r < +\infty$, let $H: [0,+\infty) \to [0,+\infty)$ be locally absolutely continuous with $H \in L^c([0,+\infty))$, and let $G \in L^r([0,+\infty))$. If, for almost every $x \in [0,+\infty)$,
$\frac{d}{dx}H(x) \le G(x),$
then $\lim_{x \to +\infty} H(x) = 0$.
Lemma 2
([30]). Let $B: \mathbb{R}^n \to \mathbb{R}^n$ be an $L$-Lipschitz continuous operator and $\tau, \alpha \in L^1_{loc}([0,+\infty))$ (that is, $\tau, \alpha \in L^1([0,b])$ for every $0 < b < +\infty$). Then, for every $u_0, v_0 \in \mathbb{R}^n$, there exists a unique strong global solution of the dynamical system.

3. Second-Order Dynamical System and Convergence

In this section, we establish a second-order dynamical system to solve the quadratic program problem and demonstrate the convergence of the trajectory of system (6).
Motivated by [11,30], we transform (4) into the following differentiable convex quadratic program:
$\min_{\mu \ge 0} H(\mu) = \frac{1}{2}\mu^{\top} B \mu + c^{\top}\mu. \qquad (5)$
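For completeness, a quadratic program of this form can be obtained from (4) by the standard positive-part splitting used in gradient-projection sparse reconstruction (cf. [1,11]). The derivation below is a sketch of that construction; the exact $B$ and $c$ chosen by the authors may differ in presentation.

```latex
% Split x = u - v with u, v >= 0, so that ||x||_1 = 1^T (u + v) at any optimum.
% Substituting into (4) and dropping the constant (1/2)||b||^2 gives, with
% mu = (u; v) in R^{2n}:
\min_{\mu \ge 0}\; \tfrac{1}{2}\,\mu^{\top} B \mu + c^{\top}\mu,
\qquad
B = \begin{pmatrix} A^{\top}A & -A^{\top}A \\ -A^{\top}A & A^{\top}A \end{pmatrix} \succeq 0,
\qquad
c = \tau\,\mathbf{1}_{2n} + \begin{pmatrix} -A^{\top}b \\ \phantom{-}A^{\top}b \end{pmatrix}.
```

Note that $B$ is positive semidefinite, so the resulting problem is convex, and its dimension $2n$ matches the diagonal matrix $\gamma = \mathrm{diag}(d_1, \ldots, d_{2n})$ used below.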
To solve (5), we structure the second-order continuous-time dynamical system based on a neural network as
$\ddot{\mu}(\theta) + \alpha(\theta)\dot{\mu}(\theta) + \tau(\theta)\big(\mu(\theta) - P_{\Omega}(\mu(\theta) - \gamma \nabla H(\mu(\theta)))\big) = 0, \quad \dot{\mu}(\theta_0) = \dot{\mu}_0 \in \Omega, \quad \mu(\theta_0) = \mu_0 \in \Omega, \qquad (6)$
where $\Omega$ is the positive orthant and $\mu \in \Omega$; $\gamma$ is a given positive diagonal matrix, i.e., $\gamma = \mathrm{diag}(d_1, \ldots, d_{2n})$ with $d_i > 0$ $(i = 1, \ldots, 2n)$; and $P_{\Omega}(\mu)$ is the projection of $\mu$ onto $\Omega$, i.e., $P_{\Omega}(\mu) = (P_{\Omega}(\mu_1), \ldots, P_{\Omega}(\mu_{2n}))^{\top}$ with $P_{\Omega}(\mu_i) = (\mu_i)_+$.
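To make the dynamics concrete, system (6) can be simulated numerically. The sketch below uses a plain explicit Euler scheme with constant $\alpha$, $\tau$, and a scalar $\gamma$ for illustration only; the paper's analysis allows time-varying coefficients, and the step size, horizon, and parameter values here are our own assumptions, not the authors' scheme.

```python
import numpy as np

def project_orthant(z):
    """P_Omega: projection onto the positive orthant, (z_i)_+."""
    return np.maximum(z, 0.0)

def second_order_flow(B, c, gamma, alpha=3.0, tau=1.0, h=1e-2, steps=20000):
    """Explicit Euler integration of
       mu'' + alpha*mu' + tau*(mu - P_Omega(mu - gamma*grad H(mu))) = 0,
       where H(mu) = 0.5*mu^T B mu + c^T mu, so grad H(mu) = B mu + c."""
    n = B.shape[0]
    mu = np.zeros(n)                      # mu(0) in Omega
    v = np.zeros(n)                       # mu'(0) in Omega
    for _ in range(steps):
        grad = B @ mu + c
        resid = mu - project_orthant(mu - gamma * grad)
        a = -alpha * v - tau * resid      # acceleration mu''(theta)
        v += h * a
        mu += h * v
    return mu

# Tiny demo: H(mu) = 0.5*||mu||^2 + c^T mu over mu >= 0 has minimizer (1, 0).
mu_star = second_order_flow(np.eye(2), np.array([-1.0, 1.0]), gamma=0.5)
```

The trajectory settles at the fixed point $\mu = P_{\Omega}(\mu - \gamma \nabla H(\mu))$, which is the constrained minimizer.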
Before studying the convergence properties, we first prove the existence and uniqueness of solutions of second-order continuous-time dynamical systems.
Theorem 1.
Let $\mu \mapsto \mu - P_{\Omega}(\mu - \gamma \nabla H(\mu)): \mathbb{R}^n \to \mathbb{R}^n$ be an $L$-Lipschitz continuous operator and $\tau \in L^1_{loc}([0,+\infty))$ (that is, $\tau \in L^1([0,b])$ for every $0 < b < +\infty$). Then, for every $u_0, v_0 \in \mathbb{R}^n$, there exists a unique strong global solution of the second-order continuous-time dynamical system (6).
Proof. 
Based on Lemma 2, we first prove that $\mu - P_{\Omega}(\mu - \gamma \nabla H(\mu))$ is Lipschitz continuous, noting that $\nabla H(\mu) = B\mu + c$ is Lipschitz continuous. Let $\mu_1, \mu_2 \in \Omega$, let $L$ be the Lipschitz constant, and set $d_{\min} = \min_i\{d_i\}$, $d_{\max} = \max_i\{d_i\}$. By the nonexpansiveness of the projection, we have
$\|\mu_1 - P_{\Omega}(\mu_1 - \gamma \nabla H(\mu_1)) - \mu_2 + P_{\Omega}(\mu_2 - \gamma \nabla H(\mu_2))\|_2 \le \|\mu_1 - \mu_2\|_2 + \|P_{\Omega}(\mu_1 - \gamma \nabla H(\mu_1)) - P_{\Omega}(\mu_2 - \gamma \nabla H(\mu_2))\|_2 \le \|\mu_1 - \mu_2\|_2 + \|\mu_1 - \gamma \nabla H(\mu_1) - \mu_2 + \gamma \nabla H(\mu_2)\|_2 \le (2 + d_{\max}L)\|\mu_1 - \mu_2\|_2.$
Therefore, $\mu - P_{\Omega}(\mu - \gamma \nabla H(\mu))$ is Lipschitz continuous on $\Omega$. Combined with Lemma 2, this confirms the existence and uniqueness of a solution of (6). □
Assumption 1.
Let $\tau, \alpha: [0,+\infty) \to [0,+\infty)$ be locally absolutely continuous. There exists $\omega > 0$ such that, for every $\theta \in [0,+\infty)$,
$\dot{\alpha}(\theta) \le 0 \le \dot{\tau}(\theta), \qquad \alpha^2(\theta) \ge \tau(\theta)\,\gamma\left(\omega + \frac{\gamma}{\beta} + 1\right).$
Theorem 2.
Let $H: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ have a $\frac{1}{\beta}$-Lipschitz continuous gradient with $\beta > 0$, and let $\arg\min_{\mu \in \mathbb{R}^n}\{H(\mu)\} = \mathrm{Zer}(\nabla H) \ne \emptyset$. Let $\gamma > 0$, let $\tau, \alpha: [0,+\infty) \to [0,+\infty)$ be functions fulfilling Assumption 1, and let $u_0, v_0 \in \mathbb{R}^n$. Let $\mu: [0,+\infty) \to \mathbb{R}^n$ be the unique strong global solution of the second-order continuous-time dynamical system (6). The following five conclusions hold:
(i)
$\mu$ is bounded and $\dot{\mu}, \ddot{\mu}, \mu - P_{\Omega}(\mu - \gamma \nabla H(\mu)) \in L^2([0,+\infty); \mathbb{R}^n)$;
(ii)
$\lim_{\theta \to +\infty} \dot{\mu}(\theta) = \lim_{\theta \to +\infty} \ddot{\mu}(\theta) = \lim_{\theta \to +\infty} \big(\mu(\theta) - P_{\Omega}(\mu(\theta) - \gamma \nabla H(\mu(\theta)))\big) = 0$;
(iii)
$\mu(\theta)$ converges weakly to a minimizer of $H$ as $\theta \to +\infty$;
(iv)
If $\mu^*$ is a minimizer of $H$, then $\nabla H(\mu(\cdot)) - \nabla H(\mu^*) \in L^2([0,+\infty); \mathbb{R}^n)$ and $\lim_{\theta \to +\infty} \nabla H(\mu(\theta)) = \nabla H(\mu^*)$;
(v)
If $H$ is uniformly convex, then $\mu(\theta)$ converges strongly to the unique minimizer of $H$ as $\theta \to +\infty$.
Proof. (i) Take an arbitrary element $\mu^* \in \arg\min_{\mu \in \mathbb{R}^n}\{H(\mu)\} = \mathrm{Zer}(\nabla H)$ and, for every $\theta \in [0,+\infty)$, consider $g(\theta) = \frac{1}{2}\|\mu(\theta) - \mu^*\|^2$. The projection characterization of (6), combined with $\nabla H(\mu^*) = 0$ and the monotonicity of $\nabla H$, leads to
$0 \le \left\langle \frac{1}{\tau}\ddot{\mu} + \frac{\alpha}{\tau}\dot{\mu} + \mu - \mu^*,\ -\nabla H(\mu) + \nabla H(\mu^*) - \frac{1}{\gamma\tau}\ddot{\mu} - \frac{\alpha}{\gamma\tau}\dot{\mu} \right\rangle.$
By the cocoercivity of $\nabla H$, we obtain
$\beta\|\nabla H(\mu) - \nabla H(\mu^*)\|^2 \le \frac{1}{\tau}\big(\langle \ddot{\mu}, -\nabla H(\mu) + \nabla H(\mu^*)\rangle + \alpha\langle \dot{\mu}, -\nabla H(\mu) + \nabla H(\mu^*)\rangle\big) - \frac{1}{\gamma\tau^2}\|\ddot{\mu} + \alpha\dot{\mu}\|^2 + \left\langle \mu - \mu^*,\ -\frac{1}{\gamma\tau}\ddot{\mu} - \frac{\alpha}{\gamma\tau}\dot{\mu} \right\rangle.$
Next, we structure the function as follows:
$p: [0,+\infty) \to \mathbb{R}, \quad p(\theta) = H(\mu(\theta)) - H(\mu^*) - \langle \nabla H(\mu^*), \mu(\theta) - \mu^* \rangle.$
Because of the convexity of H, we have
$p(\theta) \ge 0;$
therefore,
$\left\langle \mu - \mu^*,\ -\frac{1}{\gamma\tau}\ddot{\mu} - \frac{\alpha}{\gamma\tau}\dot{\mu} \right\rangle = -\frac{1}{\gamma\tau}\big(\ddot{g} + \alpha\dot{g} - \|\dot{\mu}\|^2\big)$
exists.
Finally, combining (10) and (11), we obtain
$\beta\tau(\theta)\|\nabla H(\mu(\theta)) - \nabla H(\mu^*)\|^2 + \frac{d^2}{d\theta^2}\Big(\frac{g}{\gamma} + p\Big) + \frac{d}{d\theta}\Big(\alpha(\theta)\Big(\frac{g}{\gamma} + p\Big)\Big) + \frac{1}{\gamma}\frac{d}{d\theta}\Big(\frac{\alpha(\theta)}{\tau(\theta)}\|\dot{\mu}(\theta)\|^2\Big) + \Big(\frac{\alpha^2}{\gamma\tau} + \frac{\dot{\alpha}\tau - \alpha\dot{\tau}}{\gamma\tau^2} - \frac{1}{\beta} - \frac{1}{\gamma}\Big)\|\dot{\mu}\|^2 + \frac{1}{\gamma\tau}\|\ddot{\mu}\|^2 \le 0.$
Together with Assumption 1, we have
$\beta\tau(\theta)\|\nabla H(\mu(\theta)) - \nabla H(\mu^*)\|^2 + \frac{d^2}{d\theta^2}\Big(\frac{g}{\gamma} + p\Big) + \frac{d}{d\theta}\Big(\alpha(\theta)\Big(\frac{g}{\gamma} + p\Big)\Big) + \frac{1}{\gamma}\frac{d}{d\theta}\Big(\frac{\alpha(\theta)}{\tau(\theta)}\|\dot{\mu}(\theta)\|^2\Big) + \omega\|\dot{\mu}(\theta)\|^2 + \frac{1}{\gamma\tau(\theta)}\|\ddot{\mu}(\theta)\|^2 \le 0,$
for θ [ 0 , + ) . This implies that the function
$\theta \mapsto \frac{d}{d\theta}\Big(\frac{g}{\gamma} + p\Big)(\theta) + \alpha(\theta)\Big(\frac{g}{\gamma} + p\Big)(\theta) + \frac{1}{\gamma}\frac{\alpha(\theta)}{\tau(\theta)}\|\dot{\mu}(\theta)\|^2$
is monotonically decreasing. Therefore, there exists a real number $D$ such that, for every $\theta \in [0,+\infty)$,
$\frac{d}{d\theta}\Big(\frac{g}{\gamma} + p\Big)(\theta) + \underline{\alpha}\Big(\frac{g}{\gamma} + p\Big)(\theta) \le D.$
Calculating and rearranging, we obtain
$\frac{g(T)}{\gamma} + p(T) \le \Big(\frac{g(0)}{\gamma} + p(0)\Big)\exp(-\underline{\alpha}T) + \frac{D}{\underline{\alpha}}\big(1 - \exp(-\underline{\alpha}T)\big).$
Therefore, there exists $M$ such that
$\frac{g}{\gamma} + p \le M, \quad g \le M, \quad p \le M;$
thus, $\mu$ is bounded.
For every $\theta \in [0,+\infty)$,
$\frac{d}{d\theta}\Big(\frac{g}{\gamma} + p\Big)(\theta) + \frac{1}{\gamma}\,\underline{\alpha}\,\bar{\tau}^{-1}\|\dot{\mu}(\theta)\|^2 \le D.$
Because $\tau$ and $\alpha$ are bounded, $\dot{\mu}$ is bounded, and hence $\dot{g}$ and $\dot{p}$ are bounded.
Calculating and rearranging, we find that there exists $K$ such that
$\frac{d}{d\theta}\Big(\frac{g}{\gamma} + p\Big)(\theta) + \alpha(\theta)\Big(\frac{g}{\gamma} + p\Big)(\theta) + \frac{1}{\gamma}\frac{\alpha}{\tau}\|\dot{\mu}\|^2 + \omega\int_0^{\theta}\|\dot{\mu}\|^2\,ds + \frac{1}{\gamma\bar{\tau}}\int_0^{\theta}\|\ddot{\mu}\|^2\,ds \le K.$
Via (17), we determine that $\dot{\mu}, \ddot{\mu} \in L^2([0,+\infty); \mathbb{R}^n)$. Finally, from (6) and Assumption 1, we infer $\mu - P_{\Omega}(\mu - \gamma \nabla H(\mu)) \in L^2([0,+\infty); \mathbb{R}^n)$, and conclusion (i) follows.
(ii) For θ [ 0 , + ) , we can obtain
$\frac{d}{d\theta}\Big(\frac{1}{2}\|\dot{\mu}(\theta)\|^2\Big) = \langle \dot{\mu}(\theta), \ddot{\mu}(\theta)\rangle \le \frac{1}{2}\|\dot{\mu}(\theta)\|^2 + \frac{1}{2}\|\ddot{\mu}(\theta)\|^2,$
and via Lemma 1, $\lim_{\theta \to +\infty} \dot{\mu}(\theta) = 0$. Since $\frac{d}{d\theta}\big(\mu - P_{\Omega}(\mu - \gamma \nabla H(\mu))\big) \in L^2([0,+\infty); \mathbb{R}^n)$, the relation $\lim_{\theta \to +\infty}\big(\mu - P_{\Omega}(\mu - \gamma \nabla H(\mu))\big) = 0$ is derived from Lemma 1, and we find that $\lim_{\theta \to +\infty} \ddot{\mu}(\theta) = 0$. From (12), it follows that $\nabla H(\mu(\cdot)) - \nabla H(\mu^*) \in L^2([0,+\infty); \mathbb{R}^n)$, which, combined with Lemma 1, yields $\lim_{\theta \to +\infty} \nabla H(\mu(\theta)) = \nabla H(\mu^*)$. Thus, we obtain (ii) and (iv).
(iii) By (13), the function $\theta \mapsto \frac{d}{d\theta}\big(\frac{g}{\gamma} + p\big)(\theta) + \alpha(\theta)\big(\frac{g}{\gamma} + p\big)(\theta)$ is monotonically decreasing. From the results above, we can confirm that $\lim_{\theta \to +\infty} \alpha(\theta)\big(\frac{g}{\gamma} + p\big)(\theta)$ exists and that $\lim_{\theta \to +\infty} \big(\frac{g}{\gamma} + p\big)(\theta) \in \mathbb{R}$ holds.
Additionally, since $\mu^*$ is a minimizer of $H$, we can determine that, for all $\mu^* \in \arg\min_{\mu \in \mathbb{R}^n}\{H(\mu)\}$, the limit
$\lim_{\theta \to +\infty} E(\theta, \mu^*) \in \mathbb{R}$
exists, where
$E(\theta, \mu^*) = \frac{1}{2\gamma}\|\mu(\theta) - \mu^*\|^2 + H(\mu(\theta)) - H(\mu^*) - \langle \nabla H(\mu^*), \mu(\theta) - \mu^* \rangle.$
Next, we use an approach similar to that described in [33] (see also [26], Section 5.2). Suppose $(\mu(\theta_n))_{n \in \mathbb{N}}$ converges weakly to $\bar{\mu}$, where $\theta_n \to +\infty$ as $n \to +\infty$. Since $(\mu(\theta_n), \nabla H(\mu(\theta_n))) \in \mathrm{Gr}(\nabla H)$, $\lim_{n \to +\infty} \nabla H(\mu(\theta_n)) = \nabla H(\mu^*)$, and $\mathrm{Gr}(\nabla H)$ is sequentially closed in the weak–strong topology, we have $\nabla H(\bar{\mu}) = \nabla H(\mu^*)$. Because $\mu^* \in \mathrm{Zer}(\nabla H)$, combined with $\nabla H(\bar{\mu}) = \nabla H(\mu^*)$, we obtain $\bar{\mu} \in \mathrm{Zer}(\nabla H) = \arg\min_{\mu \in \mathbb{R}^n}\{H(\mu)\}$.
Let $\mu_1, \mu_2$ be two weak sequential cluster points of $\mu(\cdot)$; that is, there exist $\theta_n \to +\infty$ and $s_n \to +\infty$ such that $(\mu(\theta_n))_{n \in \mathbb{N}}$ converges weakly to $\mu_1$ and $(\mu(s_n))_{n \in \mathbb{N}}$ converges weakly to $\mu_2$. Since $\mu_1, \mu_2 \in \arg\min_{\mu \in \mathbb{R}^n}\{H(\mu)\}$, we have $\lim_{\theta \to +\infty}\big(E(\theta, \mu_1) - E(\theta, \mu_2)\big) \in \mathbb{R}$, and we find
$\lim_{\theta \to +\infty}\Big(\frac{1}{\gamma}\langle \mu(\theta), \mu_2 - \mu_1 \rangle + \langle \nabla H(\mu_2) - \nabla H(\mu_1), \mu(\theta) \rangle\Big) \in \mathbb{R}.$
Taking the limit along each of the two subsequences and subtracting yields
$\frac{1}{\gamma}\|\mu_1 - \mu_2\|^2 + \langle \nabla H(\mu_2) - \nabla H(\mu_1), \mu_2 - \mu_1 \rangle = 0;$
since both terms are nonnegative by the monotonicity of $\nabla H$, we find that $\mu_1 = \mu_2$.
(v) $\mu^* \in \mathrm{Zer}(\nabla H)$ is unique. Let $\Phi_H: [0,+\infty) \to [0,+\infty]$ be the increasing modulus function of the uniformly monotone operator $\nabla H$. Given the monotonicity of $\nabla H$, we obtain
$\Phi_H\Big(\Big\|\frac{1}{\tau(\theta)}\ddot{\mu}(\theta) + \frac{\alpha(\theta)}{\tau(\theta)}\dot{\mu}(\theta) + \mu(\theta) - \mu^*\Big\|\Big) \le \Big\langle \frac{1}{\tau(\theta)}\ddot{\mu}(\theta) + \frac{\alpha(\theta)}{\tau(\theta)}\dot{\mu}(\theta),\ -\nabla H(\mu(\theta)) + \nabla H(\mu^*) \Big\rangle + \Big\langle \mu(\theta) - \mu^*,\ -\frac{1}{\gamma\tau(\theta)}\ddot{\mu}(\theta) - \frac{\alpha(\theta)}{\gamma\tau(\theta)}\dot{\mu}(\theta) \Big\rangle.$
Since $\tau$ and $\alpha$ are bounded, letting $\theta \to +\infty$ gives
$\Phi_H\Big(\Big\|\frac{1}{\tau(\theta)}\ddot{\mu}(\theta) + \frac{\alpha(\theta)}{\tau(\theta)}\dot{\mu}(\theta) + \mu(\theta) - \mu^*\Big\|\Big) \to 0.$
Because of the boundedness of $\tau$ and $\alpha$ and conclusion (ii), and since $\mu(\theta)$ is bounded with $\lim_{\theta \to +\infty}\|\nabla H(\mu(\theta)) - \nabla H(\mu^*)\| = 0$, the inequalities
$\langle \mu(\theta) - \mu^*, \nabla H(\mu(\theta)) - \nabla H(\mu^*) \rangle \ge \Phi_H(\|\mu(\theta) - \mu^*\|), \quad \theta \in [0,+\infty),$
imply $\Phi_H(\|\mu(\theta) - \mu^*\|) \to 0$, so $\mu(\theta)$ converges strongly to $\mu^*$ as $\theta \to +\infty$. □

Time Complexity Analysis

It is possible to estimate the time complexity of the second-order PNN algorithm proposed in the present work based on the analysis of the algorithm’s operations. A breakdown is presented below.
1. Distance Calculations: In each iteration, the algorithm computes the distance between each input data point and all of the codebook vectors. With N data points and M codebook vectors, this step takes O(N × M × d), where d is the dimension of the data.
2. Nearest Neighbor Search: Finding the nearest neighbor for each data point involves scanning the codebook vectors and their distances, which costs O(N × M).
3. Codebook Update: Updating the codebook vectors based on the nearest neighbors requires computations proportional to the data dimension (d); the overall complexity of updating all codebook vectors is O(N × M × d).
4. Iterations: The algorithm repeats the above operations for a certain number of iterations (T).
Combining these terms, the overall time complexity of the second-order PNN algorithm is dominated by the distance calculations and codebook updates, resulting in a time complexity of O(T × N × M × d). We are aware that this complexity is a disadvantage for real-time applications. Some potential techniques to improve efficiency include the following:
  • K-Means Preprocessing: Fast clustering algorithms such as K-means++ can be run before the actual PNN algorithm to provide a first approximation of the code book, which can help minimize the number of iterations (T) to be performed before convergence.
  • Nearest Neighbor Search Optimization: Using data structures such as k-d trees or ball-tree algorithms can help in the nearest neighbor search step, which is otherwise O(N × M) but can potentially be made O(N × log(M)) in some cases.
  • Parallelization: If computational resources are available, it is possible to parallelize the distance calculations, as well as the codebook update operations, across multiple cores or GPUs to achieve faster computation.
  • Early Stopping Criteria: The idea of using the thresholds of convergence for early stopping can also help to decrease the number of iterations (T) needed for the algorithm while maintaining accuracy.
Thus, by adopting these techniques, it is possible to obtain a more efficient version of the second-order PNN algorithm suitable for real-time computing.
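As a concrete illustration of the O(N × M × d) distance step and the O(N × M) nearest-neighbor search discussed above, both can be vectorized so that no explicit Python loop over points is needed. The sizes and arrays below are made up for illustration.

```python
import numpy as np

# Hypothetical sizes for illustration: N data points, M codebook vectors, dimension d.
N, M, d = 1000, 64, 16
rng = np.random.default_rng(1)
data = rng.standard_normal((N, d))
codebook = rng.standard_normal((M, d))

# All N*M squared distances at once via ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2,
# which is the O(N*M*d) distance-calculation step.
sq_dists = (
    (data ** 2).sum(axis=1, keepdims=True)   # shape (N, 1)
    - 2.0 * data @ codebook.T                # shape (N, M)
    + (codebook ** 2).sum(axis=1)            # shape (M,)
)

# O(N*M) nearest-neighbor search: index of the closest codebook vector per point.
nearest = sq_dists.argmin(axis=1)
```

A k-d tree or ball tree, as suggested above, would replace the argmin scan with a tree query to reduce the search cost for large M.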

4. Experimental Results

In this section, experiments are carried out on image restoration problems, and the proposed algorithm achieves superior performance over the compared algorithms. We compare the second-order PNN with the PNN algorithm reported in [11] and with second-order methods such as the SH algorithm [34] and the variable metric iterative method (VMIM) [35].
In our numerical experiments, the pixel values of each image lie in [0, 255], and we set the constraint set $C = [0, 255]^{n \times n}$. We use the following model to represent the image recovery problem:
$\min_{\bar{x} \in C} \|A\bar{x} - y\|_2,$
where x ¯ denotes the approximate matrix, y denotes the observation image, and A denotes the full-row rank matrix. To accurately represent the effect of restoring an image, we use
$\mathrm{SNR} = 20\log_{10}\frac{\|\bar{x}\|_2}{\|x - \bar{x}\|_2},$
where $\bar{x}$ denotes the restored image and $x$ denotes the original image; the higher the SNR, the better the restoration.
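The SNR formula above is straightforward to compute; a minimal sketch (the function name is ours, and it follows the formula exactly as written in the text, with the restored image in the numerator):

```python
import numpy as np

def snr_db(original, restored):
    """SNR = 20*log10(||restored||_2 / ||original - restored||_2), in dB."""
    original = np.asarray(original, dtype=float)
    restored = np.asarray(restored, dtype=float)
    return 20.0 * np.log10(
        np.linalg.norm(restored) / np.linalg.norm(original - restored)
    )

# Example: a restored image off by exactly 1 gray level per pixel.
original = np.full(100, 100.0)
restored = original * 1.01
```

For this example the norm ratio is 101, so the SNR is 20·log10(101) ≈ 40.09 dB.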
Figure 1 shows five public-domain images of “Mandrill”, “Bear”, “Camera”, “Chilies”, and “Humming bird” obtained from https://commons.wikimedia.org/ (accessed on 11 March 2024). Figure 2 shows the blurry images after the original image is noised.
In Figure 3, we compare the second-order PNN with the PNN algorithm, SH algorithm, and VMIM. The second-order PNN is most effective in dealing with noised images.
In Table 1, we show a comparison with SSIM, and Table 2 shows the SNR of images restored using the four algorithms. The second-order PNN is better than other methods in dealing with image restoration problems.
In order to better validate the proposed method, we conducted experiments on several public-domain medical images obtained from https://commons.wikimedia.org/ (accessed on 11 March 2024). Figure 4 shows the original medical images of “Heart” and “Spleen” from the MSD dataset. Figure 5 shows the blurry images after the original images were noised.
Table 3 shows a comparison of SSIM, and Table 4 shows the SNR of images restored using the four algorithms. Second-order PNN is better than other methods in dealing with image restoration problems in the MSD dataset.
We also experimented with our algorithm on the ORL dataset. As shown in Figure 6, we recovered images using the second-order PNN, PNN, the SH algorithm, and VMIM, illustrated from top to bottom. Compared with the other methods, the second-order PNN clearly generates less ambiguous images.
In addition to the previously reported experiments on general image restoration, we conducted further evaluations on compressed sensing reconstruction tasks frequently encountered in medical imaging. We focused on the following two specific applications:
  • CS-MRI Reconstruction: We employed our method to reconstruct magnetic resonance images from compressed measurements. We compared our approach with existing CS-MRI reconstruction algorithms using publicly available datasets. The results demonstrate that our method achieves comparable or superior reconstruction accuracy as measured by metrics like SNR and SSIM.
  • Sparse-View CT Reconstruction: We investigated the applicability of our method to sparse-view CT reconstruction, where a limited number of projection images is acquired. The results show that our method effectively recovers high-quality CT images despite the limited data, outperforming other methods in terms of visual quality and quantitative metrics.
Figure 7 shows the results of CS-MRI and sparse-view CT reconstruction for several samples. All samples include reference images for evaluation of the reconstruction results. Table 5 and Table 6 show the results of SSIM and SNR in this experiment, respectively. Our proposed method consistently achieved higher SSIM and SNR values compared to the benchmark algorithms across both CS-MRI and sparse-view CT reconstruction tasks. This indicates that our method produces reconstructions with the following characteristics:
  • Higher Fidelity: A higher SNR value signifies less noise relative to signal in the reconstructed image. In simpler terms, our method introduces less noise during the reconstruction process, leading to images that are closer to the original in terms of pixel intensity values.
  • Improved Structural Similarity: A higher SSIM value implies greater structural similarity between the reconstructed image and the ground truth (original image). This metric goes beyond pixel intensity and considers factors like luminance, contrast, and structure, offering a more comprehensive evaluation of image quality.
These superior SSIM and SNR values suggest that our proposed second-order continuous-time dynamical system effectively recovers missing information and preserves structural details in the reconstructed images. While the PNN algorithm and the SH algorithm also utilize second-order methods, our approach appears to achieve a better balance between noise reduction and structural preservation, resulting in higher-fidelity reconstructions as measured by SNR and SSIM. The VMIM, despite being an iterative method, might not be as efficient in reaching the optimal solution compared to our continuous-time approach.

4.1. Entropy Analysis

This section employs Shannon entropy as a measure to compare different image restoration techniques, with the second-order PNN as one of the approaches. Shannon entropy measures the amount of information present in an image: higher entropy values indicate images with richer, more complex intensity distributions, while lower entropy values correspond to simpler images with a more limited range of intensities. Mathematically, the Shannon entropy (H) of a digital image with gray levels ranging from 0 to L−1 can be expressed as follows:
$H = -\sum_{i=0}^{L-1} p(i) \log_2 p(i),$
where i represents the intensity level of a pixel (0 ≤ i ≤ L−1) and p(i) represents the probability of encountering intensity level i in the image. Shannon entropy was used to measure the effectiveness of the compared image restoration methods and the proposed second-order PNN. The entropy values of each method on each image in the dataset were obtained, and the results are summarized in Table 7. The first column presents the entropy of the original image, while the subsequent columns present the entropy values of the images restored by the methods under consideration; each row represents an image from the dataset. The last row of the table contains the mean absolute deviation between the entropy of the original images and the entropy after restoration with each method. A smaller value in the "Average" row means that the images restored by that method are closer in entropy to the originals, implying that the restoration preserves the image's information content while suppressing noise and artifacts. The entropy analysis shows that the "Average" value in Table 7 for the proposed second-order PNN is smaller than that of the other methods, suggesting that it is superior in preserving the information content of the restored images. This, together with the SNR and SSIM results discussed earlier, supports the efficiency of the proposed second-order PNN method.
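The entropy formula above can be evaluated from the gray-level histogram; a minimal sketch (the function name is ours, and empty histogram bins are skipped, using the convention 0·log 0 = 0):

```python
import numpy as np

def shannon_entropy(image, levels=256):
    """Shannon entropy H = -sum_i p(i) log2 p(i) over the gray-level histogram."""
    hist = np.bincount(np.asarray(image).ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()      # probability of each intensity level
    p = p[p > 0]               # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# Example: an image that is half black (0) and half white (255) has entropy 1 bit.
img = np.array([[0, 255], [0, 255]])
```

A constant image has entropy 0, since all probability mass sits in a single bin.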

4.2. Comparison of Proposed Second-Order PNN with Conjugate Gradient Method for Image Restoration

In this experiment, we compared the proposed second-order PNN method with the Conjugate Gradient (CG) method, a well-known nonlinear optimization method for this research problem. The performances of the proposed method and CG in restoring all dataset images were assessed, and the averages of the PSNR and SSIM criteria obtained for the two methods were compared. The comparison shows a significant improvement in image restoration quality using the proposed method: our model achieved an average 15.2% increase in PSNR and an average 16.19% improvement in SSIM compared to CG, showing its superiority in image restoration. From this analysis, the proposed PNN method has the following advantages in image restoration tasks. First, it uses a second-order dynamical system, which helps achieve faster convergence than the iterative procedure of CG; this matters in real-time applications, where both the accuracy of the result and the speed at which it is obtained are important. Second, image restoration is often associated with non-convex cost functions that have many local minima. The PNN framework appears to have stronger exploration characteristics, making it less likely to be trapped in local minima and thus yielding more globally effective solutions, unlike CG, which may get trapped in suboptimal regions, as mentioned earlier in this paper. Nevertheless, the PNN method is subject to some limitations. As mentioned before, the time complexity of the proposed algorithm is dominated by distance calculations and codebook updates, so it can be slower than CG, particularly when working with high-dimensional data. The PNN method also has many parameters, such as the step size and learning rate, that must be properly tuned to guarantee stability and convergence.
This parameter tuning may be more difficult than for CG, which typically needs less tuning. The conjugate gradient method also has its advantages. Computationally, it is usually less demanding than the PNN method because of its simple iterative process, so CG is a good option when computational resources are scarce. In the same regard, CG has a sound theoretical background, and its implementation is somewhat easier than that of the PNN method, which requires formulating the second-order dynamical system framework. However, CG has drawbacks as well. It can exhibit a poorer convergence rate than the PNN method, particularly for complicated objective functions such as those arising in image restoration, which can increase processing time. In addition, CG depends on gradient information and may converge to a local minimum of a non-convex objective function, and hence may not yield optimal image restoration.

5. Conclusions

In this work, a new second-order continuous-time dynamical system model was developed for image denoising in image restoration problems. Compared to other techniques, this approach has several advantages. The conducted experiments show the applicability of the proposed method to different image restoration tasks. With respect to image quality metrics, our method outperformed existing algorithms by considerable margins: the signal-to-noise ratio averaged 34.78 dB over the test dataset, a clear gain over the compared methods, and the average SSIM was 0.959, showing high structural similarity between the restored images and the ground truth. Entropy analysis further showed that the entropy of images restored with the proposed method was closer to that of the original images, indicating better preservation of information during restoration.
The current work opens avenues for further exploration in several directions, which are outlined as follows:
  • Integration with Deep Learning: The integration of the proposed dynamical system framework with deep learning structures can be further explored to enhance the image restoration capability of the suggested framework.
  • Real-Time Implementation: Methods and hardware can be developed to accelerate or search for accurate and stable numerical integration methods for real-time simulations and applications to real-world problems.
  • Generalization to Other Noise Models: The proposed method can be generalized to cover other forms of noise not included in this study to increase the scope of its use.

Author Contributions

Conception and design of study: W.W., C.W. and M.L. Acquisition of data: C.W. and M.L. Analysis and/or interpretation of data: W.W. Drafting the manuscript: W.W. Revising the manuscript critically for important intellectual content: W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

All datasets used in this study are available upon request from the first author.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 4, 586–597. [Google Scholar] [CrossRef]
  2. Wright, J.; Young, A.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef]
  3. Yan, S.; Wang, H. Semi-supervised learning by sparse representation. Proc. SIAM Int. Conf. Data Min. 2009, 3, 792–801. [Google Scholar] [CrossRef]
  4. Niebles, J.C.; Wang, H.; Fei-Fei, L. Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vis. 2008, 79, 299–318. [Google Scholar] [CrossRef]
  5. Sutton, J.L. Underwater acoustic imaging. Proc. IEEE 1979, 64, 554–566. [Google Scholar] [CrossRef]
  6. Donoho, D.; Tsaig, Y. Fast solution of l1-norm minimization problems when the solution may be sparse. IEEE Trans. Inf. Theory 2008, 54, 4789–4812. [Google Scholar] [CrossRef]
  7. Donoho, D.L. For most large underdetermined systems of linear equations, the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 2006, 59, 797–829. [Google Scholar] [CrossRef]
  8. Chen, S.S.; Donoho, D.L.; Saunders, M. Atomic decomposition by basis pursuit. SIAM J. Sci. Comput. 1999, 20, 33–61. [Google Scholar] [CrossRef]
  9. Donoho, D.L.; Huo, X. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inf. Theory 2001, 47, 2845–2862. [Google Scholar] [CrossRef]
  10. Cai, T.; Xu, G.; Zhang, J. On recovery of sparse signals via l1 minimization. IEEE Trans. Inf. Theory 2009, 55, 3388–3397. [Google Scholar] [CrossRef]
  11. Liu, Y.; Hu, J. A neural network for l1-l2 minimization based on scaled gradient projection: Application to compressed sensing. Neurocomputing 2015, 173, 988–993. [Google Scholar] [CrossRef]
  12. Izuchukwu, C.; Reich, S.; Shehu, Y.; Taiwo, A. Strong convergence of forward-reflected-backward splitting methods for solving monotone inclusions with applications to image restoration and optimal control. J. Sci. Comput. 2023, 94, 73. [Google Scholar] [CrossRef]
  13. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  14. Tropp, J.A. Just relax: Convex programming methods for identifying sparse signals. IEEE Trans. Inf. Theory 2006, 52, 1030–1051. [Google Scholar] [CrossRef]
  15. Becker, S.; Bobin, J.; Candès, E.J. NESTA: A fast and accurate first-order method for sparse recovery. SIAM J. Imaging Sci. 2011, 4, 1–39. [Google Scholar] [CrossRef]
  16. Hale, E.; Yin, W.; Zhang, Y. Fixed-Point Continuation Method for Regularized Minimization with Applications to Compressed Sensing. 2007, pp. 1–45. Available online: https://www.cmor-faculty.rice.edu/~zhang/reports/tr0707.pdf (accessed on 11 June 2024).
  17. Daubechies, I.; Defrise, M.; Mol, C.D. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  18. Donoho, D.; Elad, M.; Temlyakov, V. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory 2006, 52, 6–18. [Google Scholar] [CrossRef]
  19. Tomioka, R.; Sugiyama, M. Dual-augmented lagrangian method for efficient sparse reconstruction. IEEE Signal Process. Lett. 2009, 16, 1067–1070. [Google Scholar] [CrossRef]
  20. Wang, Y.; Zhou, G.; Caccetta, L.; Liu, W. An alternative lagrange-dual based algorithm for sparse signal reconstruction. IEEE Trans. Signal Process. 2011, 95, 1895–1901. [Google Scholar] [CrossRef]
  21. He, X.; Li, C.; Huang, T.; Li, C.; Huang, J. A recurrent neural network for solving bilevel linear programming problem. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 824–830. [Google Scholar] [CrossRef]
  22. Xu, B.; Liu, Q.; Huang, T. A discrete-time projection neural network for sparse signal reconstruction with application to face recognition. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 151–162. [Google Scholar] [CrossRef] [PubMed]
  23. Cheng, L.; Hou, Z.G.; Lin, Y.; Tan, M.; Zhang, W.C.; Wu, F. Recurrent neural network for non-smooth convex optimization problems with application to the identification of genetic regulatory networks. IEEE Trans. Neural Netw. 2011, 22, 714–726. [Google Scholar] [CrossRef] [PubMed]
  24. Hu, X.; Wang, J. Solving the assignment problem using continuous-time and discrete-time improved dual networks. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 821–827. [Google Scholar] [CrossRef]
  25. Liu, Q.; Wang, J. A projection neural network for constrained quadratic minimax optimization. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2891–2900. [Google Scholar] [CrossRef] [PubMed]
  26. Abbas, B.; Attouch, H. Dynamical systems and forward-backward algorithms associated with the sum of a convex subdifferential and a monotone cocoercive operator. Optimization 2015, 64, 2223–2252. [Google Scholar] [CrossRef]
  27. Alvarez, F. The minimizing property of a second order dissipative system in Hilbert spaces. SIAM J. Control Optim. 2000, 38, 1102–1119. [Google Scholar] [CrossRef]
  28. Alvarez, F.; Attouch, H.; Bolte, J.; Redont, P. Second-order gradient-like dissipative dynamical system with Hessian-driven damping. Application to optimization and mechanics. J. Math. Pures Appl. 2002, 81, 747–779. [Google Scholar] [CrossRef]
  29. Attouch, H.; Mainge, P.E. Asymptotic behavior of second-order dissipative evolution equations combining potential with non-potential effects. ESAIM Control Optim. Calc. Var. 2011, 17, 836–857. [Google Scholar] [CrossRef]
  30. Bot, R.I.; Csetnek, E. Second order forward-backward dynamical system for monotone inclusion problems. SIAM J. Control Optim. 2016, 54, 1423–1443. [Google Scholar] [CrossRef]
  31. Ding, Z.; Zhang, X.; Tu, Z.; Xia, Z. Restoration by Generation with Constrained Priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 2567–2577. [Google Scholar] [CrossRef]
  32. Yang, Z.; Xia, J.; Li, S.; Huang, X.; Zhang, S.; Liu, Z.; Fu, Y.; Liu, Y. A Dynamic Kernel Prior Model for Unsupervised Blind Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 17–21 June 2024; pp. 26046–26056. [Google Scholar] [CrossRef]
  33. Bolte, J. Continuous gradient projection method in Hilbert spaces. J. Optim. Theory Appl. 2003, 119, 235–259. [Google Scholar] [CrossRef]
  34. Sahu, D.; Cho, Y.; Dong, Q.; Kashyap, M.; Li, X. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2021, 87, 1075–1095. [Google Scholar] [CrossRef]
  35. Suparatulatorn, R.; Charoensawan, P.; Poochinapan, K. Inertial self-adaptive algorithm for solving split feasible problems with applications to image restoration. Math. Meth. Appl. Sci. 2019, 42, 7268–7284. [Google Scholar] [CrossRef]
  36. Wikimedia Commons. The Collection of Freely Usable Media Files in the Public Domain. Available online: https://commons.wikimedia.org (accessed on 11 March 2024).
Figure 1. Original images [36].
Figure 2. Blurred images.
Figure 3. Comparison of images recovered using different algorithms.
Figure 4. Original images [36].
Figure 5. Blurred images.
Figure 6. Comparison of images recovered using different algorithms.
Figure 7. Comparison of medical images reconstructed using different algorithms. The first and second rows show CS-MRIs of the liver and brain, respectively; the third and fourth rows are sparse-view CTs of lung and brain tissue, respectively.
Table 1. Comparison of SSIM using four algorithms.

SSIM
Image          Second-Order PNN   PNN      SH Algorithm   VMIM
Mandrill       0.9810             0.9255   0.8705         0.8948
Bear           0.9636             0.8657   0.8236         0.8455
Camera         0.9789             0.9318   0.8866         0.9066
Chilies        0.9932             0.9257   0.8879         0.9458
Humming bird   0.9420             0.8966   0.8574         0.8966
Table 2. SNR comparison of images recovered using the four algorithms.

SNR (dB)
Image          Second-Order PNN   PNN       SH Algorithm   VMIM
Mandrill       38.8540            33.3376   31.1266        31.9444
Bear           35.5558            29.9515   28.5082        29.2690
Camera         34.0920            29.1634   26.7608        27.7594
Chilies        38.2654            28.2636   26.5516        29.5649
Humming bird   31.2293            28.6944   27.1680        28.7004
Table 3. Comparison of SSIM of the four algorithms.

SSIM
Image          Second-Order PNN   PNN      SH Algorithm   VMIM
Hand X-Ray     0.9612             0.8796   0.8019         0.8376
Chest X-Ray    0.8639             0.8310   0.7916         0.8112
Brain MRI      0.9749             0.9059   0.8396         0.8692
Lung CT        0.9887             0.8671   0.8011         0.9032
Spine MRI      0.9420             0.8883   0.8093         0.8881
Table 4. Comparison of the SNR of images recovered using the four algorithms.

SNR (dB)
Image          Second-Order PNN   PNN       SH Algorithm   VMIM
Hand X-Ray     33.3165            28.3317   25.7514        26.9016
Chest X-Ray    30.2218            29.0595   27.2117        28.2230
Brain MRI      35.7026            29.7005   26.7759        28.0624
Lung CT        37.4857            27.4540   25.6609        28.7853
Spine MRI      33.0760            30.4356   27.8580        30.4226
Table 5. Comparison of SSIM using four algorithms for medical images.

SSIM
Image          Second-Order PNN   PNN      SH Algorithm   VMIM
Liver CS-MRI   0.8959             0.8445   0.7248         0.7901
Brain CS-MRI   0.9601             0.9373   0.8734         0.9109
Lung SV-CT     0.9987             0.9851   0.9761         0.9895
Brain SV-CT    0.8422             0.7730   0.7363         0.7912
Table 6. Comparison of the SNR of medical images recovered using the four algorithms.

SNR (dB)
Image          Second-Order PNN   PNN       SH Algorithm   VMIM
Liver CS-MRI   28.0815            26.2426   23.2327        24.7944
Brain CS-MRI   32.6422            30.4581   26.5276        28.5781
Lung SV-CT     46.8568            36.7989   34.8843        38.1832
Brain SV-CT    30.6284            29.0673   28.4506        29.4119
Table 7. Results obtained through entropy analysis.

Entropy Analysis
Image                Original   Second-Order PNN   PNN       SH Algorithm   VMIM
Mandrill             6.66       6.6723             6.6827    6.7152         6.6943
Bear                 7.3969     7.3688             7.3643    7.3509         7.3587
Camera               7.186      7.1659             7.1074    7.0984         7.1182
Chilies              7.7845     7.724              7.7227    7.7197         7.7242
Humming bird         7.3619     7.3519             7.3231    7.3153         7.3282
Hand X-Ray           7.0622     6.9979             6.9914    6.9805         6.9812
Chest X-Ray          7.2265     7.1382             7.1264    7.1053         7.1148
Brain MRI            6.5263     6.5312             6.5164    6.5154         6.5224
Lung CT              7.4951     7.2297             7.2281    7.1931         7.2371
Spine MRI            7.1421     7.1169             7.0616    7.0482         7.0716
Average difference   -          0.05787            0.07624   0.09095        0.0759
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Wang, W.; Wang, C.; Li, M. A Second-Order Continuous-Time Dynamical System for Solving Sparse Image Restoration Problems. Mathematics 2024, 12, 2360. https://doi.org/10.3390/math12152360
