Article

A Graduated Non-Convexity Technique for Dealing Large Point Spread Functions

1 Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Via Vanvitelli, 1, I-06123 Perugia, Italy
2 Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni, 67/a, I-50134 Firenze, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(10), 5861; https://doi.org/10.3390/app13105861
Submission received: 29 March 2023 / Revised: 23 April 2023 / Accepted: 4 May 2023 / Published: 9 May 2023
(This article belongs to the Special Issue Signal and Image Processing: From Theory to Applications)

Abstract:
This paper focuses on reducing the computational cost of a GNC algorithm for deblurring images in the case of full symmetric block Toeplitz matrices composed of Toeplitz blocks. This case is widespread in real applications, where the PSF often has a vast domain. The analysis centers on the class of γ-matrices, which can be multiplied by vectors quickly. The paper presents a theoretical and experimental analysis of how γ-matrices can accurately approximate symmetric Toeplitz matrices. The proposed approach adds to the GNC technique a minimization step for a new approximation of the energy function, in which the Toeplitz matrices found in the blocks of the blur operator are replaced with γ-matrices. The experimental results demonstrate that the new GNC algorithm proposed in this paper reduces computation time by over 20% compared with its previous version, while the image reconstruction quality remains unchanged.

1. Introduction

This paper addresses the problem of reconstructing blurry and noisy images. In particular, we consider blur operators that have a PSF (point spread function) with a vast domain. This case is pervasive; for example, it is common in underwater images (cf. [1,2]). Figure 1 shows the point spread function of the Hubble space telescope camera before NASA corrections. The non-blind problem of restoring images consists of estimating the original image, starting from the observed image and the known blur. This problem is ill-posed in the Hadamard sense (cf. [3]). However, using various regularization techniques, it is possible to turn this problem into a well-posed one that can be solved by minimizing an energy function (cf. [4,5,6]). This function consists of two terms, one that ensures the solution fits the observed data and another that enforces the regularity of the solution.
To produce more realistic restored images, we consider the discontinuities of natural images, particularly around the edges where different objects meet (cf. [7]). A possible approach is to use an energy function that implicitly accounts for these discontinuities (cf. [4,5,6,8]); such an energy function has a non-convex regularization term. Moreover, to improve the quality of the restored images, it is possible to add constraints that prevent thick boundaries from forming between smooth areas (cf. [4,5]). The resulting energy function is thus not convex and cannot be minimized by classical gradient descent algorithms. Such a non-convex energy function can be minimized by stochastic or deterministic techniques. The former can produce very accurate results, but their computational cost is very high (cf. [7]). Deterministic algorithms do not ensure convergence to the ideal solution but allow one to obtain adequate reconstructions in lower computational times (cf. [9,10]). GNC (Graduated Non-Convexity) is one of the most widely used deterministic methods for edge-preserving reconstruction. This technique replaces the energy function by a sequence of approximations that converge to the original one, and minimizes each approximation with a classical optimization algorithm, taking the minimum found for the previous approximation as the starting point.
In [8], Blake and Zisserman propose the first GNC algorithm, dealing with the denoising problem. Bedini, Gerace, and Tonazzini in [11] present an extension of GNC for restoring noisy images that takes the geometry of the discontinuities into account. Nikolova in [12] proposes a GNC technique to restore noisy blurred images. Boccuto, Gerace, and Pucci in [5] present a GNC algorithm for deblurring and denoising that imposes, alternatively, the line-continuation constraint or the non-parallelism constraint; such an algorithm is referred to as CATILED (Convex Approximation Technique for Interacting Line Elements Deblurring). In this paper, we extend the CATILED technique, in the case of the non-parallelism constraint, to deal with PSFs with a large domain.
The GNC technique has recently been applied to many other problems, such as the combinatorial data analysis problem of seriation (cf. [13]), stochastic non-convex optimization (cf. [14]), combinatorial optimization problems defined on the set of partial permutation matrices (cf. [15]), maximum a posteriori inference (cf. [16]), pose estimation (cf. [17]), and spatial perception (cf. [18]).
When the blur matrix is full, the computational cost of a GNC algorithm increases considerably; however, this technique remains a good compromise between non-edge-preserving techniques, which yield low-accuracy results in a low computational time, and stochastic techniques, which give qualitatively accurate results at a much higher computational cost. Experimental evidence suggests that the most computationally expensive minimization is that of the first approximation of the energy function, since the subsequent minimizations start from a good approximation of the solution. In this paper, we propose a method to minimize the first convex approximation by approximating each block of the blur operator with matrices that can be handled efficiently by a fast discrete transform. Since each block of the blur operator is a symmetric Toeplitz matrix, we focus here on finding a class of matrices that is easy to handle computationally while providing a good approximation of the Toeplitz matrices. Specifically, we approximate each Toeplitz matrix by the sum of a symmetric circulant and a reverse circulant matrix (cf. [19]). Symmetric circulant matrices have several applications in ordinary and partial differential equations (cf. [20,21,22,23,24]), image and signal restoration (cf. [25,26]), and graph theory (cf. [27,28,29,30,31,32]). Reverse circulant matrices have different applications, such as exponential data fitting and signal processing (cf. [33,34,35,36,37]).
On the basis of theoretical and experimental results, we choose a suitable subclass of matrices to use for our approximations, namely the set of γ-matrices presented in [38]. We tested the proposed algorithm in reconstructing both artificially blurred images and images affected by natural blurring. These experiments show how using such approximations reduces the computational cost of the CATILED algorithm by about a fifth without affecting the quality of the results. We refer to the technique proposed here as E–CATILED (Extended Convex Approximation Technique for Interacting Line Elements Deblurring).
The paper is structured as follows: in Section 2, we present the problem of image deblurring and the related regularization technique; in Section 3, we recall the CATILED algorithm for the minimization of the energy function; in Section 4, we present the proposed E–CATILED technique; in Section 5, we report our experimental results.

2. Regularization of the Problem

The formulation of the image generation direct problem is
$$\mathbf{y} = \hat A\,\mathbf{x} + \mathbf{n},$$
where the $n^2$-dimensional vectors $\mathbf{x}$ and $\mathbf{y}$ are, respectively, the original and the observed image. We assume that all intensity values are in one column in lexicographic order. The $n^2$-dimensional vector $\mathbf{n}$ expresses the additive noise on the image, which we assume to be independent and identically distributed Gaussian, with zero mean and known variance $\hat\sigma^2$.
The $n^2 \times n^2$ matrix $\hat A$ represents a translation-invariant blur operation on an image. This operation involves computing a light-intensity weighted average of the neighboring pixels of each pixel in the original image and assigning the result to that pixel in the blurred image. To define the matrix $\hat A$, we use a matrix $M \in \mathbb R^{(2\hat h+1)\times(2\hat h+1)}$ called the blur mask, and we compute the entries of $\hat A$ as
$$a_{(i,j),(i+w,j+v)} = \begin{cases} m_{\hat h+1+w,\,\hat h+1+v}, & \text{if } |w|, |v| \le \hat h,\\ 0, & \text{otherwise}. \end{cases}$$
Here, in lexicographic notation, the generic index $\big((i,j),(k,l)\big)$ of the matrix $\hat A$ is understood as $\big((j-1)n+i,\ (l-1)n+k\big)$. Namely, the blur mask determines the weighting factors used in the weighted averaging operation. Thus, the matrix $\hat A$ becomes a block Toeplitz matrix with Toeplitz blocks (cf. [39]). Note that the size $2\hat h+1$ of the blur mask corresponds to the size of the domain of the PSF (point spread function). If we assume that the blur operator is symmetric in the horizontal and in the vertical direction and that the domain of the PSF is vast (that is, $2\hat h+1 \approx n$), then the full matrix $\hat A$ is symmetric.
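In practice, the product $\hat A\mathbf x$ is computed by applying the blur mask directly to the image rather than by forming the full $n^2 \times n^2$ matrix. The following C sketch illustrates this reading of the formula above, with 0-based indexing; the function name and the zero padding outside the image boundary are our assumptions, not specified in the text.

/* Applies the (2h+1)x(2h+1) blur mask m to an n x n image x (row-major),
 * writing the blurred image into y:
 *   y[i][j] = sum over |w|,|v| <= h of m[h+w][h+v] * x[i+w][j+v],
 * with pixels outside the image treated as zero (assumed padding). */
void apply_blur(const double *x, double *y, int n, const double *m, int h)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int w = -h; w <= h; w++)
                for (int v = -h; v <= h; v++) {
                    int r = i + w, c = j + v;
                    if (r >= 0 && r < n && c >= 0 && c < n)
                        s += m[(h + w) * (2 * h + 1) + (h + v)] * x[r * n + c];
                }
            y[i * n + j] = s;
        }
}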
The image restoration problem consists of finding an estimate $\mathbf x$ of the unknown original image, given the blurred image $\mathbf y$, the matrix $\hat A$, and the variance $\hat\sigma^2$ of the noise. This problem is ill-posed in the Hadamard sense; therefore, some regularization technique is necessary to solve it. Using second-order difference operators in a regularization technique allows for significantly better results than those obtained with first-order difference operators (cf. [5]). On the other hand, third-order difference operators yield only slightly better results than second-order ones, at the cost of an excessive increase in computational effort. Therefore, we use second-order difference operators.
A clique c of order 2 is a subset of points of a square grid on which the second-order finite difference is defined. We denote by C the set of all cliques of order 2. More precisely, we consider
$$C = \big\{ c = \{(i,j), (h,l), (r,q)\} :\ i = h = r,\ j = l+1 = q+2,\ \text{or}\ i = h+1 = r+2,\ j = l = q \big\}.$$
We denote by $D_c\mathbf x$ the second-order finite difference operator of the vector $\mathbf x$ associated with the clique c; that is, if $c = \{(i,j), (h,l), (r,q)\} \in C$, then
$$D_c\mathbf x = x_{i,j} - 2x_{h,l} + x_{r,q}.$$
Let us introduce the concept of adjacent clique of order 2, used to define the non-parallelism constraint, whose importance is apparent in Figure 2. The blurred image appears in (a); (b) is reconstructed from (a) without imposing the non-parallelism constraint, while the image in (d) is obtained by enforcing it. Although the reconstructions of Figure 2b,d appear similar to the human eye, the underlying quality for the latter is higher, as visible in the corresponding line process plots (c) and (e).
Given a vertical clique
$$c = \{(i,j), (i+1,j), (i+2,j)\}, \quad i = 3, \ldots, n-2,\ \ j = 1, \ldots, n,$$
we define its preceding clique $c_{-1}$ as follows:
$$c_{-1} = \{(i-2,j), (i-1,j), (i,j)\}.$$
If c is a horizontal clique,
$$c = \{(i,j), (i,j+1), (i,j+2)\}, \quad i = 1, \ldots, n,\ \ j = 3, \ldots, n-2,$$
then its preceding clique $c_{-1}$ is defined by setting
$$c_{-1} = \{(i,j-2), (i,j-1), (i,j)\}.$$
A regularized solution $\tilde{\mathbf x}$ is defined as a minimizer of the following energy function (cf. [5]):
$$E(\mathbf x) = \|\mathbf y - \hat A\mathbf x\|^2 + \sum_{c \in C} \psi\big(D_c\mathbf x,\ D_{c_{-1}}\mathbf x\big), \qquad(1)$$
where
$$\psi(t_1, t_2) = \begin{cases} \bar g(t_1, 0), & \text{if } |t_2| < s = \dfrac{\sqrt{\hat\alpha}}{\hat\lambda},\\[4pt] \bar g(t_1, \hat\varepsilon), & \text{if } |t_2| \ge s, \end{cases} \qquad(2)$$
and
$$\bar g(t, k) = \begin{cases} \hat\lambda^2 t^2, & \text{if } |t| < \dfrac{\sqrt{\hat\alpha + k}}{\hat\lambda},\\[4pt] \hat\alpha + k, & \text{if } |t| \ge \dfrac{\sqrt{\hat\alpha + k}}{\hat\lambda}. \end{cases}$$
The first term of the energy function E in (1) is the so-called data consistency term, while the second additive term in (1) is the smoothness term.
Note that the free parameters in the energy function in (1) are $\hat\lambda$, $\hat\alpha$, and $\hat\varepsilon$. The parameter $\hat\lambda$ adjusts the degree of smoothness of the solution; $\hat\alpha$ represents the cost of adding a discontinuity to the estimated solution, while $\hat\varepsilon$ is an extra cost for an adjacent parallel discontinuity. A correct value of these parameters allows for more accurate reconstructions (cf. [40]). Figure 3c presents the function ψ for $\hat\lambda = 1$, $\hat\alpha = 80$, and $\hat\varepsilon = 80$.
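For concreteness, here is a minimal C rendering of our reading of the functions $\bar g$ and ψ in (2); parameter names mirror the hatted symbols of the text, and this is a sketch, not the paper's code.

#include <math.h>

/* Truncated quadratic g-bar(t,k): lambda^2 t^2, capped at alpha + k;
 * the cap is reached at |t| = sqrt(alpha + k) / lambda. */
static double g_bar(double t, double k, double lambda, double alpha)
{
    double thr = sqrt(alpha + k) / lambda;
    return (fabs(t) < thr) ? lambda * lambda * t * t : alpha + k;
}

/* psi(t1,t2): the extra cost eps is charged on t1 when the adjacent
 * second-order difference t2 already exceeds s = sqrt(alpha)/lambda,
 * i.e., when a parallel discontinuity is already present. */
static double psi(double t1, double t2, double lambda, double alpha, double eps)
{
    double s = sqrt(alpha) / lambda;
    return (fabs(t2) < s) ? g_bar(t1, 0.0, lambda, alpha)
                          : g_bar(t1, eps, lambda, alpha);
}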

3. CATILED Technique

This section presents the CATILED (Convex Approximation Technique for Interacting Line Elements Deblurring) algorithm introduced in [5]. Such an algorithm is a GNC (Graduated Non-Convexity) technique (cf. [4,5,8,41,42,43]) that allows minimization of the energy function E given in (1). It is simple to verify that such a function is non-convex. In order to use a gradient descent technique, it is necessary to determine an appropriate initial point near the global optimum. For this purpose, a GNC technique constructs a family of approximations $\{E^{(p)}\}_p$ of the energy function, such that the first approximation is convex and the last one coincides with the original non-convex function. Then, the following algorithm finds an approximation of the global minimum using the family $\{E^{(p)}\}_p$.
initialize $\mathbf x$;
while $E^{(p)} \ne E$ do
  •   find the minimum of the function $E^{(p)}$ starting from the initial point $\mathbf x$;
  •   $\mathbf x = \arg\min E^{(p)}$;
  •   update the parameter p.
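A skeletal C rendering of this loop, under our reading of the scheme (p decreasing from 2 to 0 with a fixed step h, each level warm-started from the previous minimizer); the inner solver, NL-SOR in CATILED, is left as a callback and not implemented here.

/* The inner solver must minimize E^(p) starting from the current x. */
typedef void (*inner_solver)(double *x, int size, double p);

void gnc(double *x, int size, double h, inner_solver minimize_Ep)
{
    int levels = (int)(2.0 / h + 0.5);        /* p = 2, 2 - h, ..., 0 */
    for (int k = 0; k <= levels; k++)
        minimize_Ep(x, size, 2.0 - k * h);    /* warm start from previous x */
}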
It is immediate to verify that the first term of the energy function E in (1), called the data consistency term, is convex. Then, finding a first convex approximation of the function E reduces to determining a convex approximation of the second additive term in (1), called the smoothness term.
In [5], the authors first determine a $C^1(\mathbb R^2)$ family of approximations of the function ψ in (2). Namely, $\psi^{(0)} \equiv \psi$, and, for $p \in (0, 1]$,
$$\psi^{(p)}(t_1,t_2) = \begin{cases} \bar g^{(p)}(t_1, 0), & \text{if } |t_2| \le s,\\ a^{(p)}(t_1)\,(|t_2| - s)^2 + \bar g^{(p)}(t_1, 0), & \text{if } s < |t_2| \le \frac{u^{(p)} + s}{2},\\ -a^{(p)}(t_1)\,(|t_2| - u^{(p)})^2 + \bar g^{(p)}(t_1, \hat\varepsilon), & \text{if } \frac{u^{(p)} + s}{2} < |t_2| < u^{(p)},\\ \bar g^{(p)}(t_1, \hat\varepsilon), & \text{otherwise}, \end{cases} \qquad(3)$$
where $u^{(p)} = s + pz$, with an arbitrary $z > 0$.
The function $\bar g^{(p)}(t, k)$ is
$$\bar g^{(p)}(t,k) = \begin{cases} \hat\lambda^2 t^2, & \text{if } |t| < q_p(k),\\ \hat\alpha + k - \frac{\tau^{(p)}}{2}\,(|t| - r_p(k))^2, & \text{if } q_p(k) \le |t| \le r_p(k),\\ \hat\alpha + k, & \text{if } |t| > r_p(k), \end{cases}$$
where
$$q_p(k) = \left(\frac{\hat\alpha + k}{\hat\lambda^2\left(\frac{2}{\tau^{(p)}} + \frac{1}{\hat\lambda^2}\right)}\right)^{1/2},$$
with $\tau^{(p)} = \tau^*/p$, where $\tau^* > 0$ is an arbitrary constant, and
$$r_p(k) = \frac{\hat\alpha + k}{\hat\lambda^2\,q_p(k)}.$$
The function $a^{(p)}(t)$ is
$$a^{(p)}(t) = \frac{2\,\big(\bar g^{(p)}(t, \hat\varepsilon) - \bar g^{(p)}(t, 0)\big)}{\big(u^{(p)} - s\big)^2}.$$
Thus, the first convex approximation of ψ of class $C^1(\mathbb R^2)$ in the CATILED technique is
$$\psi^{(2)}(t_1, t_2) = \begin{cases} \hat\lambda^2 t_1^2, & \text{if } |t_1| < q_1(0),\\ 2\hat\lambda^2 q_1(0)\,|t_1| - \hat\lambda^2 q_1^2(0), & \text{if } |t_1| \ge q_1(0). \end{cases} \qquad(4)$$
For $p \in [1, 2]$,
$$\psi^{(p)} = (p - 1)\,\psi^{(2)} + (2 - p)\,\psi^{(1)},$$
where $\psi^{(1)}$ and $\psi^{(2)}$ are given in (3) and (4), respectively. In the CATILED algorithm, the parameter p varies from 2 to 0 with a fixed step h.
Note that all variables used in this section are necessary to make all the approximations $\psi^{(p)}$ of the function ψ, for $p \in (0, 2]$, of class $C^1(\mathbb R^2)$ (see [5] for details). In order to obtain a graphical view, Figure 3a–c show the graphs of the functions $\psi^{(2)}$, $\psi^{(1)}$, and $\psi^{(0)} \equiv \psi$ when $\hat\lambda = 1$, $\hat\alpha = 80$, and $\hat\varepsilon = 80$.
The different approximations are minimized by the NL-SOR (Non-Linear Successive Over-Relaxation) algorithm (cf. [5,8]). In this algorithm, at each iteration, the solution is updated along the opposite direction of the gradient of the energy function. Thus, the current solution has to be multiplied by $\hat A^T\hat A$ in the computation of the data consistency component of the gradient. Since $\hat A$ is a full matrix, this operation is extremely costly.

4. E–CATILED Technique

It is possible to verify experimentally that the most expensive minimization is the first one, since the others start from a good approximation of the solution. Hence, in this paper, when minimizing the first convex approximation, we propose to approximate every block of the operator $\hat A$ through matrices whose products with vectors can be computed by a suitable fast discrete transform.
Since every block of $\hat A$ is a symmetric Toeplitz matrix, we now deal with determining a class of matrices that is easy to handle from the computational point of view and that provides a good approximation of the Toeplitz matrices. In particular, in this paper, we approximate each Toeplitz matrix by the sum of a symmetric circulant and a reverse circulant matrix.

4.1. Spectral Characterization of β-Matrices

Given $A \in \mathbb C^{n\times n}$, we denote below by $A^*$ the transpose conjugate of A. In this and the following subsection, for simplicity of notation, we let the indices of $n \times n$ matrices and n-vectors vary between 0 and $n-1$. We begin by presenting a class of simultaneously diagonalizable matrices recently proposed in [38]. Let n be a fixed positive integer, and let $Q_n = (q_{k,j})_{k,j} \in \mathbb R^{n\times n}$, where
$$q_{k,j} = \begin{cases} \alpha_j \cos\dfrac{2\pi kj}{n}, & \text{if } 0 \le j \le n/2,\\[4pt] \alpha_j \sin\dfrac{2\pi k(n-j)}{n}, & \text{if } n/2 < j \le n-1, \end{cases} \qquad(5)$$
and
$$\alpha_j = \begin{cases} \dfrac{1}{\sqrt n} = \bar\alpha, & \text{if } j = 0, \text{ or } j = n/2 \text{ when } n \text{ is even},\\[4pt] \sqrt{\dfrac{2}{n}} = \tilde\alpha, & \text{otherwise}. \end{cases}$$
Set
$$Q_n = \big(\,q^{(0)}\ \ q^{(1)}\ \ \cdots\ \ q^{(\lfloor n/2\rfloor)}\ \ q^{(\lfloor n/2\rfloor+1)}\ \ \cdots\ \ q^{(n-2)}\ \ q^{(n-1)}\,\big),$$
with
$$q^{(0)} = \frac{1}{\sqrt n}\,(1\ \ 1\ \ \cdots\ \ 1)^T = \frac{1}{\sqrt n}\,u^{(0)}, \qquad(6)$$
$$q^{(j)} = \sqrt{\frac{2}{n}}\,\Big(1\ \ \cos\frac{2\pi j}{n}\ \ \cdots\ \ \cos\frac{2\pi j(n-1)}{n}\Big)^T = \sqrt{\frac{2}{n}}\,u^{(j)}, \qquad q^{(n-j)} = \sqrt{\frac{2}{n}}\,\Big(0\ \ \sin\frac{2\pi j}{n}\ \ \cdots\ \ \sin\frac{2\pi j(n-1)}{n}\Big)^T = \sqrt{\frac{2}{n}}\,v^{(j)}, \qquad(7)$$
for $j = 1, 2, \ldots, \lfloor (n-1)/2 \rfloor$. If n is even, we have
$$q^{(n/2)} = \frac{1}{\sqrt n}\,(1\ \ {-1}\ \ 1\ \ {-1}\ \ \cdots\ \ {-1})^T = \frac{1}{\sqrt n}\,u^{(n/2)}. \qquad(8)$$
Note that Q n is an orthonormal matrix (cf. [44]).
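This orthonormality is easy to check numerically; the following self-contained C program builds $Q_n$ from (5) and measures $\max_{i,j}|(Q_nQ_n^T - I)_{i,j}|$ (a test of the reconstruction above, not code from the paper).

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 8;                               /* any size works here */
    const double pi = acos(-1.0);
    double *Q = malloc((size_t)n * n * sizeof *Q);
    for (int k = 0; k < n; k++)
        for (int j = 0; j < n; j++) {
            /* alpha_j = 1/sqrt(n) for j = 0 (and j = n/2, n even), else sqrt(2/n) */
            double a = (j == 0 || (n % 2 == 0 && j == n / 2))
                           ? 1.0 / sqrt((double)n) : sqrt(2.0 / n);
            Q[k * n + j] = (j <= n / 2) ? a * cos(2.0 * pi * k * j / n)
                                        : a * sin(2.0 * pi * k * (n - j) / n);
        }
    double err = 0.0;                        /* max |(Q Q^T - I)_{ij}| */
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double s = 0.0;
            for (int t = 0; t < n; t++) s += Q[i * n + t] * Q[j * n + t];
            err = fmax(err, fabs(s - (i == j ? 1.0 : 0.0)));
        }
    printf("max |Q Q^T - I| = %.2e\n", err);
    free(Q);
    return 0;
}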
Given a vector $\lambda \in \mathbb C^n$, $\lambda = (\lambda_0\ \lambda_1\ \cdots\ \lambda_{n-1})^T$, we define
$$\mathrm{diag}(\lambda) = \Lambda = \begin{pmatrix} \lambda_0 & 0 & \cdots & 0\\ 0 & \lambda_1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_{n-1} \end{pmatrix} \in \mathbb C^{n\times n}.$$
We recall that a vector $\lambda \in \mathbb R^n$, $\lambda = (\lambda_0\ \lambda_1\ \cdots\ \lambda_{n-1})^T$, is said to be symmetric if $\lambda_j = \lambda_{(n-j) \bmod n}$ for $j = 0, 1, \ldots, \lfloor n/2\rfloor$, and is said to be asymmetric if $\lambda_j = -\lambda_{(n-j)\bmod n}$ for $j = 0, 1, \ldots, \lfloor n/2\rfloor$.
Let $\mathcal G_n$ be the space of the following simultaneously diagonalizable matrices:
$$\mathcal G_n = \mathrm{sd}(Q_n) = \{Q_n\Lambda Q_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n\}.$$
The elements of such a class are called γ-matrices. Moreover, in [38], the following classes are presented:
$$\mathcal C_n = \{Q_n\Lambda Q_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n,\ \lambda \text{ symmetric}\},$$
$$\mathcal B_n = \{Q_n\Lambda Q_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n,\ \lambda \text{ asymmetric}\}.$$
It is possible to see that $\mathcal G_n$ is a matrix algebra of dimension n, $\mathcal C_n$ is a subalgebra of $\mathcal G_n$ of dimension $\lfloor n/2\rfloor + 1$, and $\mathcal B_n$ is a linear subspace of $\mathcal G_n$ of dimension $\lfloor (n-1)/2\rfloor$ (see [45]). The following results hold.
Proposition 1
([45]). One has
$$\mathcal G_n = \mathcal C_n \oplus \mathcal B_n,$$
where ⊕ is the orthogonal sum with respect to the Frobenius product $\langle\cdot,\cdot\rangle$, defined by
$$\langle G_1, G_2\rangle = \mathrm{tr}(G_1^T G_2), \quad G_1, G_2 \in \mathcal G_n,$$
where $\mathrm{tr}(G)$ is the trace of the matrix G.
We recall the definition of the classical Hartley matrix (see also [19] and the references therein). If n is odd, we have
$$H_n = \frac{1}{\sqrt n}\Big(\,u^{(0)}\ \ u^{(1)}+v^{(1)}\ \ \cdots\ \ u^{(\frac{n-1}{2})}+v^{(\frac{n-1}{2})}\ \ u^{(\frac{n-1}{2})}-v^{(\frac{n-1}{2})}\ \ \cdots\ \ u^{(1)}-v^{(1)}\,\Big).$$
When n is even, we obtain
$$H_n = \frac{1}{\sqrt n}\Big(\,u^{(0)}\ \ u^{(1)}+v^{(1)}\ \ \cdots\ \ u^{(\frac n2-1)}+v^{(\frac n2-1)}\ \ u^{(\frac n2)}\ \ u^{(\frac n2-1)}-v^{(\frac n2-1)}\ \ \cdots\ \ u^{(1)}-v^{(1)}\,\Big).$$
It is not difficult to see that
$$H_n = Q_n Y_n,$$
where $Y_n = (y_{k,j})_{k,j} \in \mathbb R^{n\times n}$ is defined by
$$y_{k,j} = \begin{cases} 1, & \text{if } k = j = 0,\\ \frac{1}{\sqrt 2}, & \text{if } k = j \text{ and } 1 \le k \le \frac{n-1}{2},\\ \frac{1}{\sqrt 2}, & \text{if } k + j = n \text{ and } 1 \le k \le n-1,\\ -\frac{1}{\sqrt 2}, & \text{if } k = j \text{ and } \frac{n+1}{2} \le k \le n-1,\\ 0, & \text{otherwise}, \end{cases}$$
if n is odd, and
$$y_{k,j} = \begin{cases} 1, & \text{if } k = j = 0 \text{ or } k = j = \frac n2,\\ \frac{1}{\sqrt 2}, & \text{if } k = j \text{ and } 1 \le k \le \frac n2 - 1,\\ \frac{1}{\sqrt 2}, & \text{if } k + j = n \text{ and } 1 \le k \le n-1,\ k \ne \frac n2,\\ -\frac{1}{\sqrt 2}, & \text{if } k = j \text{ and } \frac n2 + 1 \le k \le n-1,\\ 0, & \text{otherwise}, \end{cases}$$
if n is even. Now, set
$$\mathcal H_n = \mathrm{sd}(H_n) = \{H_n\Lambda H_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n\}. \qquad(9)$$
It is not difficult to see that
$$\mathcal C_n = \{Q_n\Lambda Q_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n,\ \lambda \text{ symmetric}\} = \{H_n\Lambda H_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n,\ \lambda \text{ symmetric}\}. \qquad(10)$$
From (9) and (10), it follows that
$$\mathcal H_n = \mathcal C_n \oplus \mathcal F_n,$$
where
$$\mathcal F_n = \{H_n\Lambda H_n^T :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb R^n,\ \lambda \text{ asymmetric}\}.$$
The Fourier matrix is defined by $F_n = (f_{k,l})_{k,l} \in \mathbb C^{n\times n}$, where
$$f_{k,l} = \frac{1}{\sqrt n}\,\omega_n^{kl}, \quad k, l = 0, 1, \ldots, n-1,$$
with $\omega_n = e^{\frac{2\pi i}{n}}$. Let $\mathcal W_n$ be the space of all real matrices simultaneously diagonalizable by $F_n$, that is,
$$\mathcal W_n = \mathrm{sd}(F_n) = \{F_n\Lambda F_n^* \in \mathbb R^{n\times n} :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in \mathbb C^n\}.$$
It is not difficult to see that $\mathcal W_n$ is a commutative matrix algebra. Moreover, we define the following class:
$$\mathcal A_n = \{F_n\Lambda F_n^* :\ \Lambda = \mathrm{diag}(\lambda),\ \lambda \in (i\mathbb R)^n,\ \lambda \text{ asymmetric}\}.$$
Finally, we define the β-matrices as the matrices belonging to the following set:
$$\mathcal V_n = \mathcal C_n \oplus \mathcal B_n \oplus \mathcal F_n \oplus \mathcal A_n.$$

4.2. Structural Characterizations of β-Matrices

In this subsection, we show that $\mathcal V_n$ coincides with the direct sum of the set of all real circulant matrices and the set of all reverse circulant matrices.
We consider the following families:
$$\mathcal L_{n,k} = \big\{A \in \mathbb R^{n\times n} :\ \text{there is } a = (a_0\ \cdots\ a_{n-1})^T \in \mathbb R^n \text{ with } a_{l,j} = a_{(j+kl) \bmod n}\big\},$$
$$\mathcal K_{n,k} = \big\{A \in \mathbb R^{n\times n} :\ \text{there is a symmetric } a = (a_0\ \cdots\ a_{n-1})^T \in \mathbb R^n \text{ with } a_{l,j} = a_{(j+kl) \bmod n}\big\},$$
$$\mathcal J_{n,k} = \big\{A \in \mathbb R^{n\times n} :\ \text{there is a symmetric } a = (a_0\ \cdots\ a_{n-1})^T \in \mathbb R^n \text{ with } \textstyle\sum_{t=0}^{n-1} a_t = 0,\ \sum_{t=0}^{n-1}(-1)^t a_t = 0 \text{ when } n \text{ is even, and } a_{l,j} = a_{(j+kl) \bmod n}\big\},$$
where $k \in \{1, 2, \ldots, n-1\}$.
When $k = n-1$, $\mathcal L_{n,n-1}$ is the class of all real circulant matrices, that is, the family of those matrices $C \in \mathbb R^{n\times n}$ such that every row, after the first, has the elements of the previous one shifted cyclically one place right (see, e.g., [46]).
Given a vector $c \in \mathbb R^n$, $c = (c_0\ c_1\ \cdots\ c_{n-1})^T$, let us define
$$\mathrm{circ}(c) = C = \begin{pmatrix} c_0 & c_1 & c_2 & \cdots & c_{n-2} & c_{n-1}\\ c_{n-1} & c_0 & c_1 & \cdots & c_{n-3} & c_{n-2}\\ c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-4} & c_{n-3}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ c_1 & c_2 & c_3 & \cdots & c_{n-1} & c_0 \end{pmatrix},$$
where $C \in \mathcal L_{n,n-1}$.
Theorem 1
(Theorems 3.2.2 and 3.2.3 in [46]). The following result holds:
$$\mathcal W_n = \mathcal L_{n,n-1}.$$
As a consequence of this theorem, we obtain that the n eigenvectors of every circulant matrix $C \in \mathbb R^{n\times n}$ are given by
$$w^{(j)} = \big(1\ \ \omega_n^{j}\ \ \omega_n^{2j}\ \ \cdots\ \ \omega_n^{(n-1)j}\big)^T,$$
and the eigenvalues of a matrix $C = \mathrm{circ}(c) \in \mathcal W_n$ are expressed by
$$\lambda_j = c^T w^{(j)} = \sum_{k=0}^{n-1} c_k\,\omega_n^{jk}, \quad j = 0, 1, \ldots, n-1.$$
Now, we present some results about real symmetric circulant matrices. Observe that, if $C = \mathrm{circ}(c)$ with $c \in \mathbb R^n$, then C is symmetric if and only if c is symmetric. Thus, the class of all real symmetric circulant matrices coincides with $\mathcal K_{n,n-1}$ and has dimension $\lfloor n/2\rfloor + 1$ over $\mathbb R$.
Theorem 2
(see, e.g., (§4 in [27]), (Lemma 3 in [44])). Let $C \in \mathcal K_{n,n-1}$. Then, the set of all eigenvectors of C can be expressed as $\{q^{(0)}, q^{(1)}, \ldots, q^{(n-1)}\}$, where $q^{(j)}$, $j = 0, 1, \ldots, n-1$, is as in (6)–(8).
Note that from Theorem 2 it follows that the set of all real symmetric circulant matrices is contained in $\mathcal G_n$. The next result holds.
Theorem 3.
(see, e.g., (§1.2 in [47]), (§4 in [27]), (Theorem 1 in [48])). Let $C = \mathrm{circ}(c) \in \mathcal K_{n,n-1}$. Then, the eigenvalues $\lambda_j$ of C are given by
$$\lambda_j = c^T u^{(j)}, \quad j = 0, 1, \ldots, \lfloor n/2\rfloor,$$
where the $u^{(j)}$'s are as in (7). Moreover, for $j = 1, 2, \ldots, \lfloor(n-1)/2\rfloor$, it is
$$\lambda_j = \lambda_{n-j}. \qquad(11)$$
From Theorem 3, it follows that, if C is a real symmetric circulant matrix and λ(C) is the set of its eigenvalues, then λ(C) is symmetric, thanks to (11). Hence, $\mathcal K_{n,n-1} \subseteq \mathcal C_n$. Thus, $\mathcal C_n$ coincides with the class $\mathcal K_{n,n-1}$ of symmetric circulant matrices, since these two vector spaces have the same dimension.
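As an illustration, here is a direct C implementation of the eigenvalue formula of Theorem 3 together with the symmetry (11); this is an O(n^2) sketch, whereas in practice a fast cosine transform would be used.

#include <math.h>

/* Eigenvalues of a symmetric circulant C = circ(c):
 * lambda_j = c^T u^(j) = sum_l c_l cos(2*pi*j*l/n), with lambda_{n-j} = lambda_j. */
void symm_circulant_eigs(const double *c, double *lambda, int n)
{
    const double pi = acos(-1.0);
    for (int j = 0; j <= n / 2; j++) {
        double s = 0.0;
        for (int l = 0; l < n; l++)
            s += c[l] * cos(2.0 * pi * j * l / n);
        lambda[j] = s;
        if (j > 0 && j < n - j)
            lambda[n - j] = s;               /* symmetry (11) */
    }
}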
If $k = 1$, then $\mathcal L_{n,1}$ is the set of all real reverse circulant (or real anti-circulant) matrices, which is the class of all matrices $B \in \mathbb R^{n\times n}$ such that every row, after the first, has the elements of the previous one shifted cyclically one place left (see, e.g., [46]). Given a vector $b = (b_0\ b_1\ \cdots\ b_{n-1})^T \in \mathbb R^n$, set
$$\mathrm{rcirc}(b) = B = \begin{pmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-2} & b_{n-1}\\ b_1 & b_2 & b_3 & \cdots & b_{n-1} & b_0\\ b_2 & b_3 & b_4 & \cdots & b_0 & b_1\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ b_{n-1} & b_0 & b_1 & \cdots & b_{n-3} & b_{n-2} \end{pmatrix},$$
with $B \in \mathcal L_{n,1}$.
Observe that every matrix $B \in \mathcal L_{n,1}$ is symmetric, and that the set $\mathcal L_{n,1}$ is a linear space over $\mathbb R$, but not an algebra. Note that, if $B_1, B_2 \in \mathcal L_{n,1}$, then $B_1B_2, B_2B_1 \in \mathcal L_{n,n-1}$ (see Theorem 5.1.2 in [46]). In Appendix A, we prove that
$$\mathcal B_n = \mathcal J_{n,1}.$$
Proposition 2.
Let $B = \mathrm{rcirc}(b) \in \mathcal B_n$. Then, the eigenvalues $\lambda_j(B)$ of B can be expressed as
$$\lambda_j(B) = b^T u^{(j)}, \quad j = 0, 1, \ldots, \lfloor n/2\rfloor, \qquad(12)$$
where the $u^{(j)}$'s are as in (7). Moreover, for $j = 1, 2, \ldots, \lfloor(n-1)/2\rfloor$, we obtain
$$\lambda_{n-j}(B) = -\lambda_j(B).$$
Furthermore, it is $\lambda_0(B) = 0$, and $\lambda_{n/2}(B) = 0$ if n is even.
Proof. 
See [45]. □
We note that
$$\mathcal F_n = \{A \in \mathcal L_{n,1} :\ \text{there is an asymmetric } a \in \mathbb R^n \text{ with } A = \mathrm{rcirc}(a)\}$$
(see also [19]).
Proposition 3.
One has
$$\mathcal A_n = \{A \in \mathcal L_{n,n-1} :\ \text{there is an asymmetric } a \in \mathbb R^n \text{ with } A = \mathrm{circ}(a)\}.$$
Proof. 
See [49]. □
From Proposition 3, it follows that
$$\mathcal L_{n,n-1} = \mathcal C_n \oplus \mathcal A_n.$$
Hence, we obtain
$$\mathcal V_n = \mathcal C_n \oplus \mathcal B_n \oplus \mathcal F_n \oplus \mathcal A_n = \mathcal L_{n,1} \oplus \mathcal L_{n,n-1}.$$
At each iteration of NL-SOR, we have to multiply a vector by $\hat A^T\hat A$, where $\hat A$ is the blur matrix. Since $\hat A$ is a block Toeplitz matrix with Toeplitz blocks, each block of $\hat A^T\hat A$ is composed of symmetric Toeplitz matrices added and multiplied together. Since we approximate every symmetric Toeplitz matrix with a matrix belonging to $\mathcal V_n$, we now observe that $\mathcal V_n$ is closed under the operations of addition and multiplication. Indeed, it is not difficult to see that $\mathcal V_n$ is closed under the sum of matrices. Moreover, $\mathcal L_{n,n-1}$ is closed under multiplication; if $V_1, V_2 \in \mathcal L_{n,1}$, then $V_1V_2 \in \mathcal L_{n,n-1}$; and if $V_1 \in \mathcal L_{n,n-1}$ and $V_2 \in \mathcal L_{n,1}$, then $V_1V_2 \in \mathcal L_{n,1}$ (see, e.g., [46]).
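For concreteness, here is a reference O(n^2) C sketch of the product of a matrix $V = \mathrm{circ}(c) + \mathrm{rcirc}(b) \in \mathcal V_n$ with a vector, written only in terms of the two defining vectors and the indexing conventions above; the fast transforms of [38] replace this computation with O(n log n) work, so this sketch only fixes the indexing.

/* y = (circ(c) + rcirc(b)) x: row l of circ(c) has entry c[(j - l) mod n]
 * in column j, and row l of rcirc(b) has entry b[(j + l) mod n]. */
void beta_matvec(const double *c, const double *b,
                 const double *x, double *y, int n)
{
    for (int l = 0; l < n; l++) {
        double s = 0.0;
        for (int j = 0; j < n; j++)
            s += (c[(j - l + n) % n] + b[(j + l) % n]) * x[j];
        y[l] = s;
    }
}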

4.3. Inversion of β-Matrices

We now analyze the conditions under which a β-matrix admits an inverse.
Proposition 4.
The eigenvalues $\lambda_j(F)$ of $F = \mathrm{rcirc}(f) \in \mathcal F_n$ are given by
$$\lambda_j(F) = f^T v^{(j)}, \quad j = 0, 1, \ldots, \lfloor n/2\rfloor,$$
where the $v^{(j)}$'s are as in (7). Moreover, for $j = 1, 2, \ldots, \lfloor(n-1)/2\rfloor$, we obtain
$$\lambda_{n-j}(F) = -\lambda_j(F).$$
Proof. 
See [45]. □
Proposition 5.
The eigenvalues $\lambda_j(A)$ of $A = \mathrm{circ}(a) \in \mathcal A_n$ are given by
$$\lambda_j(A) = i\,a^T v^{(j)}, \quad j = 0, 1, \ldots, \lfloor n/2\rfloor,$$
where the $v^{(j)}$'s are as in (7), and, for $j = 1, 2, \ldots, \lfloor(n-1)/2\rfloor$, we obtain
$$\lambda_{n-j}(A) = -\lambda_j(A).$$
Proof. 
See [45]. □
It is not difficult to see that, given $C \in \mathcal C_n$ and $V \in \mathcal V_n$, the eigenvalues of CV are equal to those of VC and are given by
$$\lambda_j(CV) = \lambda_j(VC) = \lambda_j(C)\,\lambda_j(V), \quad j = 0, 1, \ldots, n-1.$$
Now, we present the next lemma.
Lemma 1.
The following properties hold.
(i) Let $B \in \mathcal B_n$, $B = \mathrm{rcirc}(b)$, and $F \in \mathcal F_n$, $F = \mathrm{rcirc}(f)$. Then, $BF \in \mathcal A_n$, and the eigenvalues of BF are expressed by
$$\lambda_j(BF) = i\,\lambda_j(B)\,\lambda_j(F),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(BF) = -\lambda_j(BF),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
(ii) Let $B \in \mathcal B_n$, $B = \mathrm{rcirc}(b)$, and $F \in \mathcal F_n$, $F = \mathrm{rcirc}(f)$. Then, $FB \in \mathcal A_n$, and the eigenvalues of FB are expressed by
$$\lambda_j(FB) = -i\,\lambda_j(B)\,\lambda_j(F),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(FB) = -\lambda_j(FB),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
(iii) Let $A \in \mathcal A_n$, $A = \mathrm{circ}(a)$, and $B \in \mathcal B_n$, $B = \mathrm{rcirc}(b)$. Then, $AB \in \mathcal F_n$, and the eigenvalues of AB are expressed by
$$\lambda_j(AB) = i\,\lambda_j(A)\,\lambda_j(B),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(AB) = -\lambda_j(AB),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
(iv) Let $B \in \mathcal B_n$, $B = \mathrm{rcirc}(b)$, and $A \in \mathcal A_n$, $A = \mathrm{circ}(a)$. Then, $BA \in \mathcal F_n$, and the eigenvalues of BA are given by
$$\lambda_j(BA) = -i\,\lambda_j(B)\,\lambda_j(A),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(BA) = -\lambda_j(BA),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
(v) Let $A \in \mathcal A_n$, $A = \mathrm{circ}(a)$, and $F \in \mathcal F_n$, $F = \mathrm{rcirc}(f)$. Then, $AF \in \mathcal B_n$, and the eigenvalues of AF are expressed by
$$\lambda_j(AF) = -i\,\lambda_j(A)\,\lambda_j(F),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(AF) = -\lambda_j(AF),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
(vi) Let $A \in \mathcal A_n$, $A = \mathrm{circ}(a)$, and $F \in \mathcal F_n$, $F = \mathrm{rcirc}(f)$. Then, $FA \in \mathcal B_n$, and the eigenvalues of FA are given by
$$\lambda_j(FA) = i\,\lambda_j(F)\,\lambda_j(A),\ \ j = 0, 1, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor; \qquad \lambda_{n-j}(FA) = -\lambda_j(FA),\ \ j = 1, 2, \ldots, \Big\lfloor\tfrac{n-1}{2}\Big\rfloor.$$
Proof. 
See [45]. □
Note that, given $A \in \mathcal A_n$ and $B \in \mathcal B_n$, we have that $\lambda_j(AB) = -\lambda_j(BA)$; hence, $AB = -BA$. If $A \in \mathcal A_n$ and $F \in \mathcal F_n$, then $\lambda_j(AF) = -\lambda_j(FA)$, so $AF = -FA$. Moreover, observe that, if $B_1, B_2 \in \mathcal B_n$, $F_1, F_2 \in \mathcal F_n$, $A_1, A_2 \in \mathcal A_n$, then $B_1B_2, F_1F_2, A_1A_2 \in \mathcal C_n$.
Now, we see when a β-matrix is invertible by another β-matrix.
Theorem 4.
Given $V_1 \in \mathcal V_n$, $V_1 = C_1 + B_1 + F_1 + A_1$, with $C_1 \in \mathcal C_n$, $B_1 \in \mathcal B_n$, $F_1 \in \mathcal F_n$, $A_1 \in \mathcal A_n$, set $\sigma_j(A_1) = -i\,\lambda_j(A_1)$, $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$. If the matrices
$$\Theta_j = \begin{pmatrix} \lambda_j(C_1) & \lambda_j(B_1) & \lambda_j(F_1) & -\sigma_j(A_1)\\ \lambda_j(B_1) & \lambda_j(C_1) & \sigma_j(A_1) & -\lambda_j(F_1)\\ \lambda_j(F_1) & -\sigma_j(A_1) & \lambda_j(C_1) & \lambda_j(B_1)\\ \sigma_j(A_1) & -\lambda_j(F_1) & \lambda_j(B_1) & \lambda_j(C_1) \end{pmatrix} \in \mathbb R^{4\times 4},$$
$j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$, are invertible, then there exists $V_2 \in \mathcal V_n$ such that $V_1V_2 = I_n$.
Proof. 
First of all, note that, if $V_2 \in \mathcal V_n$, then $V_2 = C_2 + B_2 + F_2 + A_2$, with $C_2 \in \mathcal C_n$, $B_2 \in \mathcal B_n$, $F_2 \in \mathcal F_n$, $A_2 \in \mathcal A_n$. Observe that, by Lemma 1, $V_1V_2 = C_3 + B_3 + F_3 + A_3$, where
$$C_3 = C_1C_2 + B_1B_2 + F_1F_2 + A_1A_2 \in \mathcal C_n, \qquad B_3 = C_1B_2 + B_1C_2 + F_1A_2 + A_1F_2 \in \mathcal B_n,$$
$$F_3 = C_1F_2 + F_1C_2 + B_1A_2 + A_1B_2 \in \mathcal F_n, \qquad A_3 = C_1A_2 + A_1C_2 + B_1F_2 + F_1B_2 \in \mathcal A_n.$$
By imposing $C_3 = I_n$, we obtain
$$\lambda_j(C_1)\lambda_j(C_2) + \lambda_j(B_1)\lambda_j(B_2) + \lambda_j(F_1)\lambda_j(F_2) + \lambda_j(A_1)\lambda_j(A_2) = 1$$
for $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$.
Moreover, by imposing $B_3 = O_n$, by virtue of Lemma 1 (v) and (vi), it follows that
$$\lambda_j(B_1)\lambda_j(C_2) + \lambda_j(C_1)\lambda_j(B_2) - i\,\lambda_j(A_1)\lambda_j(F_2) + i\,\lambda_j(F_1)\lambda_j(A_2) = 0$$
for $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$.
Furthermore, we impose $F_3 = O_n$. Then, from Lemma 1 (iii) and (iv), it follows that
$$\lambda_j(F_1)\lambda_j(C_2) + i\,\lambda_j(A_1)\lambda_j(B_2) + \lambda_j(C_1)\lambda_j(F_2) - i\,\lambda_j(B_1)\lambda_j(A_2) = 0$$
for $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$.
Finally, by imposing $A_3 = O_n$, from Lemma 1 (i) and (ii), we obtain
$$\lambda_j(A_1)\lambda_j(C_2) - i\,\lambda_j(F_1)\lambda_j(B_2) + i\,\lambda_j(B_1)\lambda_j(F_2) + \lambda_j(C_1)\lambda_j(A_2) = 0$$
for $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$.
Now, put $\sigma_j(A_2) = -i\,\lambda_j(A_2)$, $j = 0, 1, \ldots, \lfloor(n-1)/2\rfloor$, and $\vartheta_j^T = \big(\lambda_j(C_2)\ \lambda_j(B_2)\ \lambda_j(F_2)\ \sigma_j(A_2)\big)$. Since $\Theta_j$ is invertible, the system $\Theta_j\vartheta_j = (1\ 0\ 0\ 0)^T$ has a unique solution. □
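Numerically, computing the inverse thus amounts to solving one 4 × 4 linear system per index j. Below is a self-contained C sketch using Gaussian elimination with partial pivoting; it is illustrative only: the matrix A stands for Θ_j filled from the eigenvalues of the four components of $V_1$, and x returns the entries of $\vartheta_j$.

#include <math.h>

/* Solves the 4x4 system A x = b in place; returns 0 if a pivot is
 * (numerically) zero, i.e., if Theta_j is not safely invertible. */
int solve4(double A[4][4], double b[4], double x[4])
{
    for (int p = 0; p < 4; p++) {
        int m = p;                            /* partial pivoting */
        for (int r = p + 1; r < 4; r++)
            if (fabs(A[r][p]) > fabs(A[m][p])) m = r;
        if (fabs(A[m][p]) < 1e-14) return 0;
        for (int q = 0; q < 4; q++) { double t = A[p][q]; A[p][q] = A[m][q]; A[m][q] = t; }
        double t = b[p]; b[p] = b[m]; b[m] = t;
        for (int r = p + 1; r < 4; r++) {     /* eliminate column p */
            double f = A[r][p] / A[p][p];
            for (int q = p; q < 4; q++) A[r][q] -= f * A[p][q];
            b[r] -= f * b[p];
        }
    }
    for (int r = 3; r >= 0; r--) {            /* back substitution */
        double s = b[r];
        for (int q = r + 1; q < 4; q++) s -= A[r][q] * x[q];
        x[r] = s / A[r][r];
    }
    return 1;
}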

4.4. Approximation of Symmetric Toeplitz Matrices

For each $n \in \mathbb N$, let us consider the following class:
$$\mathcal T_n = \{T_n \in \mathbb R^{n\times n} :\ t_{k,j} = t_{|k-j|},\ k, j \in \{0, 1, \ldots, n-1\}\}. \qquad(13)$$
Observe that the class defined in (13) coincides with the family of all real symmetric Toeplitz matrices.
Now, we consider the following problem: given $T_n \in \mathcal T_n$, find
$$V_n(T_n) = \operatorname*{arg\,min}_{V \in \mathcal V_n} \|V - T_n\|_F, \qquad(14)$$
where $\|\cdot\|_F$ denotes the Frobenius norm. It is not difficult to see that, since $T_n$ is symmetric, we can assume that $V_n(T_n)$ is symmetric. Therefore, $V_n(T_n) = C_n(T_n) + B_n(T_n) + F_n(T_n)$, where $C_n(T_n) \in \mathcal C_n$, $B_n(T_n) \in \mathcal B_n$, and $F_n(T_n) \in \mathcal F_n$.
As regards γ-matrices, we prove the following:
Theorem 5.
Let $\hat{\mathcal G}_n = \mathcal K_{n,n-1} + \mathcal K_{n,1}$. Given $T_n \in \mathcal T_n$, one has
$$G_n(T_n) = C_n(T_n) + B_n(T_n) = \operatorname*{arg\,min}_{G \in \hat{\mathcal G}_n} \|G - T_n\|_F = \operatorname*{arg\,min}_{G \in \mathcal G_n} \|G - T_n\|_F, \qquad(15)$$
where $C_n(T_n) = \mathrm{circ}(c)$, with
$$c_j = \frac{(n-j)\,t_j + j\,t_{n-j}}{n},\ \ j \in \{1, 2, \ldots, n-1\}; \qquad c_0 = t_0,$$
and $B_n(T_n) = \mathrm{rcirc}(b)$, where: for n even and $j \in \{1, 2, \ldots, n-1\}\setminus\{n/2\}$,
$$b_j = \frac{1}{2n}\left[\frac{4j - 2n}{n}\,(t_j - t_{n-j}) + 4\sum_{k=1}^{(j-3)/2}\frac{2k+1}{n}\,(t_{2k+1} - t_{n-2k-1}) + 4\sum_{k=1}^{(n-j-3)/2}\frac{2k+1}{n}\,(t_{2k+1} - t_{n-2k-1})\right],$$
if j is odd, and
$$b_j = \frac{1}{2n}\left[\frac{4j - 2n}{n}\,(t_j - t_{n-j}) + 4\sum_{k=1}^{j/2-1}\frac{2k}{n}\,(t_{2k} - t_{n-2k}) + 4\sum_{k=1}^{(n-j)/2-1}\frac{2k}{n}\,(t_{2k} - t_{n-2k})\right],$$
if j is even; moreover, for n even,
$$b_0 = \frac{2}{n}\sum_{k=1}^{n/2-1}\frac{2k}{n}\,(t_{2k} - t_{n-2k}), \qquad b_{n/2} = \frac{4}{n}\sum_{k=1}^{\lfloor n/4\rfloor - 1}\frac{2k}{n}\,(t_{2k} - t_{n-2k});$$
for n odd and $j \in \{1, 2, \ldots, n-1\}$,
$$b_j = \frac{1}{2n}\left[\frac{4j - 2n}{n}\,(t_j - t_{n-j}) + 4\sum_{k=0}^{(j-3)/2}\frac{2k+1}{n}\,(t_{2k+1} - t_{n-2k-1}) + 4\sum_{k=1}^{(n-j)/2-1}\frac{2k}{n}\,(t_{2k} - t_{n-2k})\right],$$
if j is odd, and
$$b_j = \frac{1}{2n}\left[\frac{4j - 2n}{n}\,(t_j - t_{n-j}) + 4\sum_{k=1}^{j/2-1}\frac{2k}{n}\,(t_{2k} - t_{n-2k}) + 4\sum_{k=0}^{(n-j-3)/2}\frac{2k+1}{n}\,(t_{2k+1} - t_{n-2k-1})\right],$$
if j is even; finally, for n odd,
$$b_0 = \frac{2}{n}\sum_{k=0}^{(n-3)/2}\frac{2k+1}{n}\,(t_{2k+1} - t_{n-2k-1}).$$
Proof. 
Let us define
$$\varphi(c, b) = \|T_n - \mathrm{circ}(c) - \mathrm{rcirc}(b)\|_F^2$$
for any two symmetric vectors $c, b \in \mathbb R^n$. The proof is achieved by calculating the partial derivatives of the function φ; for details, see [45]. □
We prove an analogous result for generic β-matrices.
Theorem 6.
Given $T_n \in \mathcal T_n$, one has
$$V_n(T_n) = C_n(T_n) + B_n(T_n) + F_n(T_n) = \operatorname*{arg\,min}_{V \in \mathcal V_n} \|V - T_n\|_F,$$
where $C_n(T_n)$ and $B_n(T_n)$ are the same as those given in Theorem 5, and $F_n(T_n) = \mathrm{rcirc}(f)$, where
$$f_j = \frac{t_j - t_{n-j}}{n},\ \ j \in \{1, 2, \ldots, n-1\}; \qquad f_0 = 0.$$
Proof. 
Set
$$\tilde\varphi(c, b, f) = \|T_n - \mathrm{circ}(c) - \mathrm{rcirc}(b) - \mathrm{rcirc}(f)\|_F^2$$
for each pair of symmetric vectors $c, b \in \mathbb R^n$ and for every asymmetric vector $f \in \mathbb R^n$. The proof is achieved by calculating the partial derivatives of the function $\tilde\varphi$; for details, see [49]. □
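The rows c and f above are cheap to compute from the first row of $T_n$; here is a C sketch of the two closed-form expressions of Theorems 5 and 6 (the $\mathcal B_n$ part, with its longer case-by-case formulas, is omitted from this sketch).

/* Given the first row t[0..n-1] of a symmetric Toeplitz matrix, fills
 *   c: first row of C_n(T_n) = circ(c), with c_0 = t_0 and
 *      c_j = ((n - j) t_j + j t_{n-j}) / n, and
 *   f: first row of F_n(T_n) = rcirc(f), with f_0 = 0 and
 *      f_j = (t_j - t_{n-j}) / n. */
void toeplitz_approx_rows(const double *t, double *c, double *f, int n)
{
    c[0] = t[0];
    f[0] = 0.0;
    for (int j = 1; j < n; j++) {
        c[j] = ((n - j) * t[j] + j * t[n - j]) / (double)n;
        f[j] = (t[j] - t[n - j]) / (double)n;
    }
}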
Note that
$$C_n(T_n) = \operatorname*{arg\,min}_{C \in \mathcal C_n} \|C - T_n\|_F, \qquad H_n(T_n) = C_n(T_n) + F_n(T_n) = \operatorname*{arg\,min}_{H \in \mathcal H_n} \|H - T_n\|_F, \qquad(16)$$
where $C_n(T_n)$ is the same as the one given in Theorem 5 (see [50]), and $F_n(T_n)$ is the one given in Theorem 6 (see [19]).
Now, we show how the approximations obtained via β-matrices or γ-matrices allow us to obtain preconditioned symmetric Toeplitz linear systems with eigenvalues clustered around 1. For every $n \in \mathbb N$, set
$$\hat{\mathcal T}_n = \Big\{T_n \in \mathcal T_n :\ \text{there is a function } f(z) = \sum_{j=-\infty}^{+\infty} t_j z^j,\ z \in \mathbb C,\ |z| = 1, \text{ such that } \sum_{j=-\infty}^{+\infty} |t_j| < +\infty\Big\}. \qquad(17)$$
Observe that any function f defined by a series as in the first line of (17) is real-valued, and the set of such functions satisfying the condition $\sum_{j=-\infty}^{+\infty}|t_j| < +\infty$ is called the Wiener class (see, e.g., [19]). Given a function f belonging to the Wiener class and a matrix $T_n(f) = (t_{k,j})_{k,j} \in \hat{\mathcal T}_n$ such that $t_{k,j} = t_{|k-j|}$, $k, j \in \{0, 1, \ldots, n-1\}$, and $f(z) = \sum_{j=-\infty}^{+\infty} t_j z^j$, we say that $T_n(f)$ is generated by f.
Theorem 7.
For $n \in \mathbb N$, given $T_n(f) \in \hat{\mathcal T}_n$, let $C_n(f) = C_n(T_n(f))$, $B_n(f) = B_n(T_n(f))$, and $F_n(f) = F_n(T_n(f))$ be as in Theorems 5 and 6, and set $V_n(f) = C_n(f) + B_n(f) + F_n(f)$ and $G_n(f) = C_n(f) + B_n(f)$. Then, the following statements hold.
(i) For every $\varepsilon > 0$, there is a positive integer $n_0$ such that, for each $n \ge n_0$ and for every eigenvalue $\lambda_j(V_n(f))$ of $V_n(f)$, it is
$$\lambda_j(V_n(f)) \in [f_{\min} - \varepsilon,\ f_{\max} + \varepsilon],\ \ j \in \{0, 1, \ldots, n-1\},$$
where $f_{\min}$ and $f_{\max}$ denote the minimum and the maximum value of f, respectively.
(ii) For every $\varepsilon > 0$, there is a positive integer $n_0$ such that, for each $n \ge n_0$ and for every eigenvalue $\lambda_j(G_n(f))$ of $G_n(f)$, it is
$$\lambda_j(G_n(f)) \in [f_{\min} - \varepsilon,\ f_{\max} + \varepsilon],\ \ j \in \{0, 1, \ldots, n-1\}.$$
(iii) If $V_n(f)$ is invertible, then, for every $\varepsilon > 0$, there are $k, n_1 \in \mathbb N$ such that, for each $n \ge n_1$, the number of eigenvalues $\lambda_j\big((V_n(f))^{-1}T_n(f)\big)$ of $(V_n(f))^{-1}T_n(f)$ such that $\big|\lambda_j\big((V_n(f))^{-1}T_n(f)\big) - 1\big| > \varepsilon$ is less than k; namely, the spectrum of $(V_n(f))^{-1}T_n(f)$ is clustered around 1.
(iv) If $G_n(f)$ is invertible, then, for every $\varepsilon > 0$, there are $k, n_1 \in \mathbb N$ such that, for each $n \ge n_1$, the number of eigenvalues $\lambda_j\big((G_n(f))^{-1}T_n(f)\big)$ of $(G_n(f))^{-1}T_n(f)$ such that $\big|\lambda_j\big((G_n(f))^{-1}T_n(f)\big) - 1\big| > \varepsilon$ is less than k; namely, the spectrum of $(G_n(f))^{-1}T_n(f)$ is clustered around 1.
Proof. 
For (i) and (iii), see [49]; for (ii) and (iv), see [45]. □
This result confirms that both β-matrices and γ-matrices can approximate symmetric Toeplitz matrices well.

4.5. Choice of the Blur Matrix Approximation

In order to test the goodness of the proposed approximations, we proceeded as follows: having fixed the dimension n and the range of values that the entries of the considered Toeplitz matrices can assume, we created 10,000 different instances of symmetric Toeplitz matrices $T_n$, whose entries were randomly and uniformly chosen in the prefixed range. Then, we computed $G_n(T_n)$, $V_n(T_n)$, $C_n(T_n)$, and $H_n(T_n)$, given in (14)–(16), and the mean of the Frobenius norm of the difference between the matrices $T_n$ and the approximating matrices. The range considered in Table 1 is [0, 1]. Note that the approximations given via β-matrices are always the best, since the class $\mathcal V_n$ contains the other three classes considered. We focus on figuring out which class of matrices gives results most similar to those obtained via β-matrices. In this case, $G_n(T_n)$ is, in the mean, the second-best approximation.
In Table 2, the considered interval is [−1, 1], and the obtained results are analogous to the previous ones. In Table 3, we generated the first row of the symmetric Toeplitz matrix as follows: we set the value of the first entry equal to 1, and, to determine the value of the i-th entry, we multiplied the value of the (i−1)-th entry by a random constant chosen uniformly in [0.9, 1]. Such a choice allows us to better simulate the Toeplitz matrices present in the blur operators, which, in many cases, have a Gaussian shape. The behavior of the errors is similar to that of the previous cases. Moreover, it is possible to see in Table 1, Table 2 and Table 3 that, for large n, the $C_n(T_n)$ and $H_n(T_n)$ approximations give similar results, and that the $G_n(T_n)$ and $V_n(T_n)$ approximations are similar too.
Furthermore, as seen in Table 4, for large n, the $G_n(T_n)$ approximations are always better than the $H_n(T_n)$ approximations. Since the multiplication of $V_n(T_n)$ by a vector needs more fast discrete transforms than the multiplication of $G_n(T_n)$ by a vector, we deduce that, for very large n, $G_n(T_n)$ is the best choice considering both the quality of the approximation and the computational cost.
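A sketch of the error measurement behind these tables, restricted to the circulant part: given the first row t of a symmetric Toeplitz matrix and the row c produced by the formula of Theorem 5, the Frobenius error $\|T_n - \mathrm{circ}(c)\|_F$ is accumulated entry by entry (the Toeplitz entry (k,j) is $t_{|k-j|}$, the circulant entry is $c_{(j-k) \bmod n}$; function names are illustrative).

#include <math.h>
#include <stdlib.h>

double frobenius_error_circ(const double *t, const double *c, int n)
{
    double s = 0.0;
    for (int k = 0; k < n; k++)
        for (int j = 0; j < n; j++) {
            double d = t[abs(k - j)] - c[(j - k + n) % n];
            s += d * d;                     /* squared entrywise residual */
        }
    return sqrt(s);
}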
Thus, we define the following approximation of the energy function E in (1):
$$E^{(2+h)}(\mathbf x) = \|\mathbf y - \tilde A\mathbf x\|^2 + \sum_{c \in C} \psi^{(2)}\big(D_c\mathbf x,\ D_{c_{-1}}\mathbf x\big),$$
where $h > 0$ is the update step of the CATILED algorithm, and $\tilde A$ is the approximation of the blur matrix $\hat A$ in which all the symmetric Toeplitz matrices in the blocks of $\hat A$ are approximated by the γ-matrices given by Theorem 5. In the proposed GNC algorithm, the parameter p varies from $2+h$ to 0 with step h. We call such an algorithm E–CATILED (Extended Convex Approximation Technique for Interacting Line Elements Deblurring).

5. Experimental Results

In this section, we show, by some experimental results, how the E–CATILED algorithm achieves quantitative and qualitative results similar to those of CATILED, given in [5], in a reduced computational time. We test the algorithms by implementing them in the C language and running them in a Linux Ubuntu environment on a computer with an i5-9400F processor at 2.90 GHz. We consider both synthetic and real data. To obtain the synthetic data, we apply to a test image a blur operator with a Gaussian-shaped PSF (point spread function) of standard deviation $\tilde\sigma$, and, in some cases, we add an uncorrelated Gaussian noise of zero mean and variance $\hat\sigma^2$. We use the fast transforms proposed in [38] to multiply γ-matrices by vectors. These transforms are explicitly designed to deal with γ-matrices and require a small number of multiplicative operations. In the examples below, we choose the involved free parameters $\hat\lambda$, $\hat\alpha$, and $\hat\varepsilon$ empirically. On the other hand, algorithms for estimating the values of the free parameters are available in the literature (cf. [40]).
In our first experiment, we use the ideal synthetic test image in Figure 4a. We blur this image with a Gaussian-shaped PSF of standard deviation $\tilde\sigma = 1.5$; Figure 4b presents the blurred image. Figure 4c shows the reconstruction obtained with a standard non-edge-preserving Tikhonov regularization technique (cf. [51]), where the regularization parameter $\hat\lambda$ is fixed at 1, while Figure 4d shows the reconstruction obtained again by Tikhonov regularization, but with $\hat\lambda = 0.05$. Figure 4e,f presents the results obtained with the CATILED and E–CATILED algorithms, where $\hat\lambda = 1$, $\hat\alpha = 5$, and $\hat\varepsilon = 5$. It is possible to see, both quantitatively, by the MSE (Mean Squared Error) from the ideal image, and qualitatively, that the images obtained with CATILED and E–CATILED are equivalent and better than those obtained with Tikhonov regularization. In fact, by an implicit use of line elements, it is possible to obtain a more accurate reconstruction of the edges of the objects present in the ideal image. However, the computational time for determining the solution in the case of Tikhonov regularization is about one-sixth of the time of the CATILED technique. Moreover, it is possible to obtain more accurate results by minimizing the energy function in (1) via a stochastic algorithm such as simulated annealing, but with significantly longer computation times (cf. [52]).
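For reference, the MSE values quoted here and in Table 5 are means of squared pixel differences; a trivial C helper of the kind we use (illustrative, not the paper's code):

/* Mean squared error between a reconstruction x and the ideal image,
 * both stored as npix = n * n pixel arrays. */
double mse(const double *x, const double *ideal, int npix)
{
    double s = 0.0;
    for (int i = 0; i < npix; i++) {
        double d = x[i] - ideal[i];
        s += d * d;
    }
    return s / npix;
}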
In our next experiments, we consider a Gaussian-shaped PSF of standard deviation $\tilde\sigma = 3.25$, and we apply the corresponding blurring operator to the two test images in Figure 5 to obtain the starting data. In Figure 6, we present the reconstructions obtained by CATILED and E–CATILED of the image in Figure 5a, while the restorations obtained by the two algorithms for the image in Figure 5b are shown in Figure 7. In this case, we set $\hat\lambda = 0.05$, $\hat\alpha = 1$, and $\hat\varepsilon = 1$. Figure 6d shows the reconstruction of the image in Figure 5a by a Tikhonov regularization with $\hat\lambda = 0.05$. Again, one can immediately see that the implicit use of line elements improves the reconstructions both qualitatively and quantitatively.
In the third set of experiments, we consider a Gaussian-shaped PSF of standard deviation $\tilde\sigma = 10.25$, and we add to the blurred data a Gaussian noise of variance $\hat\sigma^2 = 4$. We present the reconstructions obtained by CATILED and E–CATILED for the two test images in Figure 8 and Figure 9. Here, we set $\hat\lambda = 0.01$, $\hat\alpha = 0.1$, and $\hat\varepsilon = 0.1$.
In Table 5, we report the errors, in terms of MSE, of the reconstructions obtained by CATILED and E–CATILED. Thus, these experiments show that the results of the two algorithms are equivalent in both quantitative and visual terms.
Let us now consider the real data presented in Figure 10a. Such an image is an RGB color image. A color image version of CATILED is presented in [53]. However, in this case, the blurred image does not appear to show any loss of saturation of the original colors. Thus, we can reconstruct each color component separately. We first assume each channel has a PSF with standard deviation $\tilde\sigma = 5$. The reconstructions obtained by CATILED and E–CATILED are in Figure 10b,c, respectively, and the MSE between the two reconstructions is equal to 0.1320. Then, we consider a PSF with $\tilde\sigma = 7$; the relative results are in Figure 10d,e. Here, the MSE between the two reconstructions is 0.2615. In both cases, we set $\hat\lambda = 0.05$, $\hat\alpha = 1$, and $\hat\varepsilon = 1$. Again, the two algorithms yield qualitatively similar results.
Finally, in Table 6, we report the ratios between the computation times of E–CATILED and CATILED. Note that the average computation time of the CATILED algorithm was about 96.35 min; the average computation time of the E–CATILED algorithm can easily be derived from the ratios given in Table 6. Thus, in our experimental results, using E–CATILED, we have an average computational cost gain of 22.01%. It is thus evident that E–CATILED is more cost-effective than CATILED in terms of computational time, while not affecting the quality of the reconstruction obtained.

6. Conclusions and Future Developments

In this paper, we were concerned with decreasing the computational cost of a GNC algorithm for deblurring images when the blurring matrix is a full symmetric block Toeplitz matrix with Toeplitz blocks. We analyzed the class of γ-matrices, that is, matrices whose products with vectors can be performed by fast transforms. We showed, theoretically and experimentally, how, using γ-matrices, it is possible to obtain good approximations of symmetric Toeplitz matrices. Thus, we proposed to add to the GNC technique a minimization of a new approximation of the energy function, in which the Toeplitz matrices present in the blocks of the blur operator are replaced with γ-matrices. The experimental results show that the proposed new GNC algorithm reduces the computation time by about a fifth compared with its previous version, while not changing the quality of the reconstructions. This technique could be extended in the future by considering γ-block matrices with γ-blocks and by expanding the class of approximating matrices.

Author Contributions

Conceptualization, I.G.; methodology, A.B. and I.G.; formal analysis, A.B., I.G. and V.G.; investigation, A.B., I.G. and V.G.; software, I.G.; writing—original draft preparation, A.B. and I.G.; writing—review and editing, I.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GNC   Graduated Non-Convexity
CATILED   Convex Approximation Technique for Interacting Line Elements Deblurring
E–CATILED   Extended Convex Approximation Technique for Interacting Line Elements Deblurring
PSF   Point Spread Function
MSE   Mean Squared Error

Appendix A

We prove here that $\mathcal B_n = \mathcal J_{n,1}$.
Lemma A1.
The following inclusion holds:
$$\mathcal B_n \subseteq \mathcal L_{n,1}.$$
Proof. 
Let $B \in \mathcal B_n$, $B = (b_{k,l})_{k,l}$, and let $\Lambda(B) = \mathrm{diag}(\lambda_0(B)\ \lambda_1(B)\ \cdots\ \lambda_{n-1}(B))$ be such that $\lambda_j(B) = -\lambda_{(n-j)\bmod n}(B)$ for every $j \in \{0, 1, \ldots, n-1\}$ and $B = Q_n\Lambda(B)Q_n^T$. We have
$$b_{k,l} = \sum_{j=0}^{n-1} q_{k,j}\,\lambda_j(B)\,q_{l,j}. \qquad(A1)$$
Observe that $\lambda_0(B) = 0$ and $\lambda_{n/2}(B) = 0$ if n is even. From this and (A1), we obtain
$$b_{k,l} = \sum_{j=1}^{\lfloor(n-1)/2\rfloor} \lambda_j(B)\,\big(q_{k,j}\,q_{l,j} - q_{k,n-j}\,q_{l,n-j}\big),$$
both when n is even and when n is odd. From (5) and (A1), we deduce
$$b_{k,l} = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\left(\cos\frac{2\pi kj}{n}\cos\frac{2\pi lj}{n} - \sin\frac{2\pi kj}{n}\sin\frac{2\pi lj}{n}\right) = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\cos\frac{2\pi(k+l)j}{n}.$$
Let $b = (b_0\ b_1\ \cdots\ b_{n-1})^T$, where
$$b_t = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\,\cos\frac{2\pi tj}{n}, \quad t \in \{0, 1, \ldots, n-1\}.$$
Thus, $B = \mathrm{rcirc}(b)$, because for each $k, l \in \{0, 1, \ldots, n-1\}$ we have $b_{k,l} = b_{(k+l)\bmod n}$. Hence, $\mathcal B_n \subseteq \mathcal L_{n,1}$. □
Lemma A2.
One has
$$\mathcal B_n \subseteq \mathcal K_{n,1}.$$
Proof. 
We recall that
$$\mathcal K_{n,1} = \big\{B \in \mathbb R^{n\times n} :\ \text{there is a symmetric } b = (b_0\ \cdots\ b_{n-1})^T \in \mathbb R^n \text{ with } b_{k,j} = b_{(j+k)\bmod n}\big\}.$$
By Lemma A1, we obtain $\mathcal B_n \subseteq \mathcal L_{n,1}$. Now, we prove the symmetry of b.
Let $B \in \mathcal B_n$ be such that there exists $\Lambda(B) \in \mathbb R^{n\times n}$, $\Lambda(B) = \mathrm{diag}(\lambda_0(B)\ \lambda_1(B)\ \cdots\ \lambda_{n-1}(B))$, with $B = Q_n\Lambda(B)Q_n^T$ and $\lambda_j(B) = -\lambda_{(n-j)\bmod n}(B)$ for all $j \in \{0, 1, \ldots, n-1\}$. By Lemma A1, $b_{k,j} = b_{(j+k)\bmod n}$. Moreover, by arguing as in Lemma A1, we obtain (A1), and hence
$$b_t = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\,\cos\frac{2\pi tj}{n} = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\,\cos\left(2\pi j - \frac{2\pi tj}{n}\right) = \frac{2}{n}\sum_{j=1}^{\lfloor(n-1)/2\rfloor}\lambda_j(B)\,\cos\frac{2\pi(n-t)j}{n} = b_{n-t}$$
for any $t \in \{0, 1, \ldots, n-1\}$. Thus, b is symmetric. □
Now, we present the following:
Theorem A1.
The following result holds:
$$\mathcal B_n = \mathcal J_{n,1}.$$
Proof. 
First of all, we recall that
$$\mathcal J_{n,1} = \big\{B \in \mathbb R^{n\times n} :\ \text{there is a symmetric } b = (b_0\ \cdots\ b_{n-1})^T \in \mathbb R^n \text{ with } \textstyle\sum_{t=0}^{n-1} b_t = 0,\ \sum_{t=0}^{n-1}(-1)^t b_t = 0 \text{ when } n \text{ is even, and } b_{k,j} = b_{(j+k)\bmod n}\big\}.$$
We begin with proving that $\mathcal B_n \subseteq \mathcal J_{n,1}$.
Let $B \in \mathcal B_n$. In Lemma A2, we proved that $B \in \mathcal K_{n,1}$; that is, b is symmetric and $b_{k,j} = b_{(j+k)\bmod n}$.
Now, we prove that
$$\sum_{t=0}^{n-1} b_t = 0. \qquad(A2)$$
Since $B \in \mathcal B_n$, the vector
$$u^{(0)} = (1\ \ 1\ \ \cdots\ \ 1)^T$$
is an eigenvector for the eigenvalue $\lambda_0(B) = 0$. Hence, the formula (A2) is a consequence of (12).
Again by (12), we obtain
$$\sum_{t=0}^{n-1}(-1)^t b_t = 0,$$
since the vector
$$u^{(n/2)} = (1\ \ {-1}\ \ 1\ \ {-1}\ \ \cdots\ \ {-1})^T$$
is an eigenvector for the eigenvalue $\lambda_{n/2}(B) = 0$ if n is even. Thus, $\mathcal B_n \subseteq \mathcal J_{n,1}$. Now, observe that $\mathcal J_{n,1}$ is a linear space of dimension $\lfloor(n-1)/2\rfloor$. Thus, $\mathcal B_n$ and $\mathcal J_{n,1}$ have the same dimension. Therefore, $\mathcal B_n = \mathcal J_{n,1}$. □

References

  1. Zhang, W.; Wang, Y.; Li, C. Underwater Image Enhancement by Attenuated Color Channel Correction and Detail Preserved Contrast Enhancement. IEEE J. Ocean. Eng. 2022, 47, 718–735. [Google Scholar] [CrossRef]
  2. Zhuang, P.; Wu, J.; Porikli, F.; Li, C. Underwater Image Enhancement with Hyper-Laplacian Reflectance Priors. IEEE Trans. Image Process. 2022, 31, 5442–5455. [Google Scholar] [CrossRef]
  3. Demoment, G. Image Reconstruction and Restoration: Overview of Common Estimation Structures and Problems. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 2024–2036. [Google Scholar] [CrossRef]
  4. Boccuto, A.; Gerace, I.; Martinelli, F. Half-Quadratic Image Restoration with a Non-Parallelism Constraint. J. Math. Imaging Vis. 2017, 59, 270–295. [Google Scholar] [CrossRef]
  5. Boccuto, A.; Gerace, I.; Pucci, P. Convex Approximation Technique for Interacting Line Elements Deblurring: A New Approach. J. Math. Imaging Vis. 2012, 44, 168–184. [Google Scholar] [CrossRef]
  6. Geman, D.; Reynolds, G. Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 367–383. [Google Scholar] [CrossRef]
  7. Geman, S.; Geman, D. Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 6, 721–740. [Google Scholar] [CrossRef] [PubMed]
  8. Blake, A.; Zisserman, A. Visual Reconstruction; MIT Press: Cambridge, MA, USA, 1987. [Google Scholar]
  9. Bedini, L.; Gerace, I.; Pepe, M.; Salerno, E.; Tonazzini, A. Stochastic and Deterministic Algorithms for Image Reconstruction with Implicitly Referred Discontinuities; Internal report n. r/2/85; Istituto di Elaborazione della Informazione, C.N.R.: Pisa, Italy, 1992; p. 37. [Google Scholar]
  10. Blake, A. Comparison of the efficiency of deterministic and stochastic algorithms for visual reconstruction. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 2–12. [Google Scholar] [CrossRef]
  11. Bedini, L.; Gerace, I.; Tonazzini, A. A Deterministic Algorithm for Reconstruction Images with Interacting Discontinuities. CVGIP Graph. Model. Image Process 1994, 56, 109–123. [Google Scholar] [CrossRef]
  12. Nikolova, M. Markovian Reconstruction Using a GNC Approach. IEEE Trans. Image Process. 1999, 8, 1204–1220. [Google Scholar] [CrossRef]
  13. Evangelopoulos, X.; Brockmeier, A.J.; Mu, T.; Goulermas, J.Y. A Graduated Non-Convexity Relaxation for Large-Scale Seriation. In Proceedings of the 2017 SIAM International Conference on Data Mining, Houston, TX, USA, 27–29 April 2017; pp. 462–470. [Google Scholar]
  14. Hazan, E.; Levy, K.Y.; Shalev-Shwartz, S. On Graduated Optimization for Stochastic Non-Convex Problems. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 20–22 June 2016; Volume 48, pp. 1–9. [Google Scholar]
  15. Liu, Z.-Y.; Qiao, H. GNCCP–Graduated NonConvexity and Concavity Procedure. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1258–1267. [Google Scholar] [CrossRef]
  16. Liu, Z.-Y.; Qiao, H.; Su, J.-H. MAP Inference with MRF by Graduated Non-Convexity and Concavity Procedure. In Proceedings of the Neural Information Processing, ICONIP 2014, Kuching, Malaysia, 3–6 November 2014; Loo, C.K., Yap, K.S., Wong, K.W., Teoh, A., Huang, K., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2014; Volume 8835, pp. 404–412. [Google Scholar]
  17. Smith, T.; Egeland, O. Dynamical Pose Estimation with Graduated Non-Convexity for Outlier Robustness. Model. Identif. Control 2022, 43, 79–89. [Google Scholar] [CrossRef]
  18. Yang, H.; Antonante, P.; Tzoumas, V.; Carlone, L. Graduated Non-Convexity for Robust Spatial Perception: From Non-Minimal Solvers to Global Outlier Rejection. IEEE Robot. Autom. Lett. 2020, 5, 1127–1134. [Google Scholar] [CrossRef]
  19. Bini, D.; Favati, P. On a matrix algebra related to the discrete Hartley transform. SIAM J. Matrix Anal. Appl. 1993, 14, 500–507. [Google Scholar] [CrossRef]
  20. Evans, D.J.; Okolie, S.O. The numerical solution of an elliptic P.D.E. with periodic boundary conditions in a rectangular region by the spectral resolution method. J. Comput. Appl. Math. 1982, 8, 238–241. [Google Scholar] [CrossRef]
  21. Gerace, I.; Pucci, P.; Ceccarelli, N.; Discepoli, M.; Mariani, R. A Preconditioned Finite Element Method for the p-Laplacian Parabolic Equation. Appl. Numer. Anal. Comput. Math. 2004, 1, 155–164. [Google Scholar] [CrossRef]
  22. Gilmour, A.E. Circulant matrix methods for the numerical solution of partial differential equations by FFT convolutions. Appl. Math. Model. 1988, 12, 44–50. [Google Scholar] [CrossRef]
  23. Gyori, I.; Horváth, L. Utilization of Circulant Matrix Theory in Periodic Autonomous Difference Equations. Int. J. Differ. Equ. 2014, 9, 163–185. [Google Scholar]
  24. Gyori, I.; Horváth, L. Existence of periodic solutions in a linear higher-order system of difference equations. Comput. Math. Appl. 2013, 66, 2239–2250. [Google Scholar] [CrossRef]
  25. Carrasquinha, E.; Amado, C.; Pires, A.M.; Oliveira, L. Image reconstruction based on circulant matrices. Signal Process. Image Commun. 2018, 63, 72–80. [Google Scholar] [CrossRef]
  26. Henriques, J.F. Circulant Structures in Computer Vision. Ph.D. Thesis, Department of Electrical and Computer Engineering, Faculty of Science and Technology, Coimbra, Portugal, 2015. [Google Scholar]
  27. Codenotti, B.; Gerace, I.; Vigna, S. Hardness results and spectral techniques for combinatorial problems on circulant graphs. Linear Algebra Appl. 1998, 285, 123–142. [Google Scholar] [CrossRef]
  28. Discepoli, M.; Gerace, I.; Mariani, R.; Remigi, A. A Spectral Technique to Solve the Chromatic Number Problem in Circulant Graphs. In Proceedings of the Computational Science and Its Applications—International Conference on Computational Science and Its Applications 2004, Assisi, Italy, 14–17 May 2004; Lecture Notes in Computer Sciences. Springer: Cham, Switzerland, 2004; Volume 3045, pp. 745–754. [Google Scholar]
  29. Gerace, I.; Greco, F. The Travelling Salesman Problem in symmetric circulant matrices with two stripes. Math. Struct. Comput. Sci. 2008, 18, 165–175. [Google Scholar] [CrossRef]
  30. Greco, F.; Gerace, I. The Traveling Salesman Problem in Circulant Weighted Graphs with Two Stripes. Electron. Notes Theor. Comput. Sci. 2007, 169, 99–109. [Google Scholar] [CrossRef]
  31. Gutekunst, S.C.; Williamson, D.P. Characterizing the Integrality Gap of the Subtour LP for the Circulant Traveling Salesman Problem. SIAM J. Discrete Math. 2019, 33, 2452–2478. [Google Scholar] [CrossRef]
  32. Gutekunst, S.C.; Jin, B.; Williamson, D.P. The Two-Stripe Symmetric Circulant TSP is in P. In Proceedings of the Integer Programming and Combinatorial Optimization, IPCO 2022, Eindhoven, The Netherlands, 27–29 June 2022; Aardal, K., Sanità, L., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2022; Volume 13265, pp. 319–332. [Google Scholar]
  33. Andrecut, M. Applications of left circulant matrices in signal and image processing. Mod. Phys. Lett. B 2008, 22, 231–241. [Google Scholar] [CrossRef]
  34. Badeau, R.; Boyer, R. Fast multilinear singular value decomposition for structured tensors. SIAM J. Matrix Anal. Appl. 2008, 30, 1008–1021. [Google Scholar] [CrossRef]
  35. Ding, W.; Qi, L.; Wei, Y. Fast Hankel tensor-vector product and applications to exponential data fitting. Numer. Linear Algebra Appl. 2015, 22, 814–832. [Google Scholar] [CrossRef]
  36. Papy, J.M.; De Lathauwer, L.; Van Huffel, S. Exponential data fitting using multilinear algebra: The single-channel and the multi-channel case. Numer. Linear Algebra Appl. 2005, 12, 809–826. [Google Scholar] [CrossRef]
  37. Qi, L. Hankel tensors: Associated Hankel matrices and Vandermonde decomposition. Commun. Math. Sci. 2015, 13, 113–125. [Google Scholar] [CrossRef]
  38. Boccuto, A.; Gerace, I.; Giorgetti, V. A Fast Discrete Transform for a Class of Simultaneously Diagonalizable Matrices. In Proceedings of the 22nd International Conference on Computational Science and Its Applications—ICCSA 2022, Malaga, Spain, 4–7 July 2022; Gervasi, O., Murgante, B., Hendrix, E.M.T., Taniar, D., Apduhan, B.O., Eds.; Lecture Notes in Computer Science. Springer: Cham, Switzerland, 2022; Volume 13375, pp. 214–231. [Google Scholar]
  39. Dell’Acqua, P.; Donatelli, M.; Estatico, C.; Mazza, M. Structure Preserving Preconditioners for Image Deblurring. J. Sci. Comput. 2017, 72, 147–171. [Google Scholar] [CrossRef]
  40. Gerace, I.; Martinelli, F. On Regularization Parameters Estimation in Edge-Preserving Image Reconstruction. In Proceedings of the Computational Science and Its Applications—ICCSA 2008, Perugia, Italy, 30 June–3 July 2008; Gervasi, O., Murgante, B., Laganà, A., Taniar, D., Mun, Y., Gavrilova, M.L., Eds.; Lecture Notes in Computer Science. Springer: Berlin/Heidelberg, Germany, 2008; Volume 5073, pp. 1170–1183. [Google Scholar]
41. Boccuto, A.; Gerace, I. Image reconstruction with a non-parallelism constraint. In Proceedings of the International Workshop on Computational Intelligence for Multimedia Understanding, Reggio Calabria, Italy, 27–28 October 2016; IEEE Conference Publications: Piscataway, NJ, USA, 2016; pp. 1–5. [Google Scholar]
42. Nikolova, M.; Ng, M.K.; Tam, C.-P. On ℓ1 Data Fitting and Concave Regularization for Image Recovery. SIAM J. Sci. Comput. 2013, 35, A397–A430. [Google Scholar] [CrossRef]
  43. Nikolova, M.; Ng, M.K.; Zhang, S.; Ching, W.-K. Efficient Reconstruction of Piecewise Constant Images Using Nonsmooth Nonconvex Minimization. SIAM J. Imaging Sci. 2008, 1, 2–25. [Google Scholar] [CrossRef]
  44. Lei, Y.J.; Xu, W.R.; Lu, Y.; Niu, Y.R.; Gu, X.M. On the symmetric doubly stochastic inverse eigenvalue problem. Linear Algebra Appl. 2014, 445, 181–205. [Google Scholar] [CrossRef]
  45. Boccuto, A.; Gerace, I.; Giorgetti, V.; Greco, F. Gamma-matrices: A new class of simultaneously diagonalizable matrices. arXiv 2021. Available online: https://arxiv.org/abs/2107.05890 (accessed on 28 March 2023).
  46. Davis, P.J. Circulant Matrices; John Wiley & Sons: New York, NY, USA, 1979. [Google Scholar]
  47. Bose, A.; Saha, K. Random Circulant Matrices; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA; London, UK; New York, NY, USA, 2019. [Google Scholar]
  48. Tee, G.J. Eigenvectors of block circulant and alternating circulant matrices. N. Z. J. Math. 2007, 36, 195–211. [Google Scholar]
49. Boccuto, A.; Gerace, I.; Giorgetti, V. Image Deblurring: A Class of Matrices Approximating Toeplitz Matrices. viXra 2022. Available online: https://vixra.org/abs/2201.0155 (accessed on 28 March 2023).
  50. Chan, R.H.; Strang, G. Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. Comput. 1989, 10, 104–119. [Google Scholar] [CrossRef]
  51. Bouhamidi, A.; Jbilou, K. Sylvester Tikhonov-regularization methods in image restoration. J. Comput. Appl. Math. 2007, 206, 86–98. [Google Scholar] [CrossRef]
  52. Bedini, L.; Gerace, I.; Tonazzini, A.; Gualtieri, P. Edge-preserving restoration in 2-D fluorescence microscopy. Micron 1996, 27, 431–447. [Google Scholar] [CrossRef]
  53. Gerace, I.; Pandolfi, R. A color image restoration with adjacent parallel lines inhibition. In Proceedings of the 12th International Conference on Image Analysis and Processing, Mantova, Italy, 17–19 September 2003; p. 6. [Google Scholar]
Figure 1. Point spread function of the Hubble space telescope camera.
Figure 2. The blurred image is in (a); the images reconstructed without and with the non-parallelism constraint are given in (b) and (d), respectively, with the corresponding line elements drawn in (c) and (e).
Figure 3. (a) ψ^(2); (b) ψ^(1); (c) ψ^(0) ≡ ψ.
Figure 4. (a) Ideal image; (b) Blurred data; (c) Tikhonov reconstruction with λ̂ = 1 (MSE = 106.2716); (d) Tikhonov reconstruction with λ̂ = 0.05 (MSE = 50.9508); (e) CATILED reconstruction (MSE = 18.6743); (f) E–CATILED reconstruction (MSE = 18.6591).
Figure 5. (a) First ideal image; (b) second ideal image.
Figure 6. (a) Blurred data; (b) CATILED reconstruction (MSE = 31.4166); (c) E–CATILED reconstruction (MSE = 31.3254); (d) Tikhonov reconstruction (MSE = 49.0748).
Figure 7. (a) Blurred data; (b) CATILED reconstruction (MSE = 79.5116); (c) E–CATILED reconstruction (MSE = 79.6245).
Figure 8. (a) Blurred data; (b) CATILED reconstruction (MSE = 76.7959); (c) E–CATILED reconstruction (MSE = 76.7008).
Figure 9. (a) Blurred data; (b) CATILED reconstruction (MSE = 120.4875); (c) E–CATILED reconstruction (MSE = 120.4984).
Figure 10. (a) Blurred data; (b) CATILED reconstruction with σ̃ = 5; (c) E–CATILED reconstruction with σ̃ = 5; (d) CATILED reconstruction with σ̃ = 7; (e) E–CATILED reconstruction with σ̃ = 7.
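As additional context for the Tikhonov baselines reported in Figures 4 and 6, the following minimal sketch shows one standard way of computing a Tikhonov-regularized deconvolution in the Fourier domain. It assumes periodic boundary conditions, an image-sized centred PSF, and a discrete Laplacian as regularization operator; it is an illustrative reading of such a baseline, not the code used for the experiments in this paper.

import numpy as np

def tikhonov_deblur(y, psf, lam):
    # Tikhonov-regularized deconvolution under periodic boundary conditions:
    # minimizes ||h * x - y||^2 + lam * ||Laplacian(x)||^2 in the Fourier
    # domain. The PSF is assumed to be image-sized and centred.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    lap = np.zeros_like(y, dtype=float)  # discrete Laplacian stencil
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    L = np.fft.fft2(lap)
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(X))

Varying lam reproduces the trade-off visible in Figure 4: a large weight (λ̂ = 1) oversmooths the edges, while a small one (λ̂ = 0.05) leaves residual noise; this is the behaviour that motivates edge-preserving reconstructions such as CATILED and E–CATILED.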
Table 1. Mean error obtained by the various approximations with respect to 10,000 instances of randomly generated Toeplitz matrices T_n with entries in [0, 1].

n | ‖T_n − C_n(T_n)‖_F | ‖T_n − H_n(T_n)‖_F | ‖T_n − G_n(T_n)‖_F | ‖T_n − V_n(T_n)‖_F
n = 20 | 3.1389 | 3.1156 | 3.0770 | 3.0532
n = 25 | 4.1076 | 4.0885 | 3.9591 | 3.9392
n = 30 | 4.8062 | 4.7903 | 4.7369 | 4.7207
n = 35 | 5.7528 | 5.7390 | 5.5989 | 5.5847
n = 40 | 6.4536 | 6.4416 | 6.3811 | 6.3689
n = 45 | 7.4243 | 7.4135 | 7.2649 | 7.2538
n = 50 | 8.1211 | 8.1114 | 8.0471 | 8.0373
n = 100 | 16.46786 | 16.46293 | 16.38939 | 16.38444
n = 1000 | 166.48101 | 166.48051 | 166.39821 | 166.39771
Table 2. Mean error obtained by the various approximations with respect to 10,000 instances of randomly generated Toeplitz matrices T_n with entries in [−1, 1].

n | ‖T_n − C_n(T_n)‖_F | ‖T_n − H_n(T_n)‖_F | ‖T_n − G_n(T_n)‖_F | ‖T_n − V_n(T_n)‖_F
n = 20 | 6.2564 | 6.2098 | 6.1313 | 6.0838
n = 25 | 8.2016 | 8.1633 | 7.8982 | 7.8584
n = 30 | 9.6160 | 9.5842 | 9.4776 | 9.4453
n = 35 | 11.517 | 11.489 | 11.210 | 11.182
n = 40 | 12.915 | 12.891 | 12.771 | 12.747
n = 45 | 14.835 | 14.813 | 14.521 | 14.499
n = 50 | 16.292 | 16.272 | 16.141 | 16.121
n = 100 | 32.92819 | 32.91833 | 32.76966 | 32.75976
n = 1000 | 332.72496 | 332.72396 | 332.56154 | 332.56054
Table 3. Mean error obtained by the various approximations with respect to 10,000 instances of randomly generated Toeplitz matrices T_n with decreasing entries in [0, 1].

n | ‖T_n − C_n(T_n)‖_F | ‖T_n − H_n(T_n)‖_F | ‖T_n − G_n(T_n)‖_F | ‖T_n − V_n(T_n)‖_F
n = 20 | 2.28601 | 2.26095 | 2.10745 | 2.08025
n = 25 | 3.17788 | 3.15482 | 2.92053 | 2.89542
n = 30 | 4.07270 | 4.05158 | 3.73644 | 3.71341
n = 35 | 4.95798 | 4.93865 | 4.54353 | 4.52243
n = 40 | 5.79877 | 5.78109 | 5.31037 | 5.29105
n = 45 | 6.59117 | 6.57494 | 6.03320 | 6.01547
n = 50 | 7.30809 | 7.29317 | 6.68763 | 6.67133
n = 100 | 11.56697 | 11.55943 | 10.60308 | 10.59485
n = 1000 | 13.68293 | 13.68225 | 13.43137 | 13.43068
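The experiments of Tables 1–3 can be reproduced in spirit with a short Monte Carlo test. The sketch below generates random symmetric Toeplitz matrices and measures the Frobenius error of their best circulant approximation; it assumes that C_n(T_n) denotes the Frobenius-optimal (T. Chan-type) circulant approximation, while the H_n(T_n), G_n(T_n) and V_n(T_n) constructions are specific to this paper (cf. [45,49]) and are not reproduced here.

import numpy as np
from scipy.linalg import toeplitz, circulant

def chan_first_column(t):
    # First column of the Frobenius-optimal circulant approximation of the
    # symmetric Toeplitz matrix with first column t (T. Chan's construction):
    # c_k = ((n - k) t_k + k t_{n-k}) / n.
    n = len(t)
    k = np.arange(n)
    return ((n - k) * t + k * t[(n - k) % n]) / n

rng = np.random.default_rng(0)
n, trials = 20, 10_000
errors = np.empty(trials)
for i in range(trials):
    t = rng.uniform(0.0, 1.0, n)  # entries in [0, 1], as in Table 1
    T = toeplitz(t)               # symmetric Toeplitz matrix
    C = circulant(chan_first_column(t))
    errors[i] = np.linalg.norm(T - C, "fro")
print(errors.mean())  # comparable with the C_n column of Table 1 for n = 20

Sorting t in decreasing order before building T gives the setting of Table 3, and drawing the entries from [−1, 1] gives that of Table 2.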
Table 4. Number of times in which the G_n(T_n) approximation gives better results than the H_n(T_n) approximation, with respect to 10,000 instances of randomly generated Toeplitz matrices T_n with decreasing entries.

Decreasing case | range = [−1, 1] | range = [0, 1]
n = 20 | 8727 | 10,000
n = 25 | 9794 | 10,000
n = 30 | 9765 | 10,000
n = 35 | 9973 | 10,000
n = 40 | 9943 | 10,000
n = 45 | 9993 | 10,000
n = 50 | 9990 | 10,000
n = 100 | 10,000 | 10,000
n = 1000 | 10,000 | 10,000
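Table 4 is a paired comparison over the same random instances: for each sampled T_n, the errors of the two approximations are computed and the wins of G_n(T_n) over H_n(T_n) are counted. Schematically, assuming err_G and err_H are arrays of per-instance Frobenius errors obtained as in the previous sketch:

import numpy as np

def count_wins(err_G, err_H):
    # Number of instances on which the G_n approximation has a strictly
    # smaller Frobenius error than the H_n approximation.
    return int(np.sum(np.asarray(err_G) < np.asarray(err_H)))

A value of 10,000 therefore means that G_n(T_n) was better on every generated instance.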
Table 5. Mean squared error of the reconstructions.

Figure | CATILED | E–CATILED
Figure 4 | 18.6743 | 18.6591
Figure 6 | 31.4166 | 31.3254
Figure 7 | 79.5116 | 79.6245
Figure 8 | 76.7959 | 76.7008
Figure 9 | 120.4875 | 120.4984
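The MSE values of Table 5 (and of the figure captions) can be read as the mean squared pixel difference between the ideal and the reconstructed image; the following helper makes the assumed definition explicit:

import numpy as np

def mse(ideal, reconstruction):
    # Mean squared error between two equally-sized grayscale images.
    ideal = np.asarray(ideal, dtype=float)
    reconstruction = np.asarray(reconstruction, dtype=float)
    return float(np.mean((ideal - reconstruction) ** 2))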
Table 6. Ratios between the time costs of E–CATILED and CATILED.

Figure | Figure 4 | Figure 7 | Figure 6 | Figure 9 | Figure 8 | Figure 10b,c | Figure 10d,e
Ratio | 0.7356 | 0.8162 | 0.7849 | 0.7812 | 0.7598 | 0.7921 | 0.7892
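The ratios of Table 6 compare wall-clock reconstruction times on the same blurred data, so values below 1 mean that E–CATILED is faster; a ratio of about 0.8 corresponds to a time reduction of about 20%. A minimal measurement sketch, where catiled and e_catiled are placeholders for the two reconstruction routines (not shown here):

import time

def time_cost_ratio(e_catiled, catiled, blurred):
    # Wall-clock ratio time(E-CATILED) / time(CATILED) on the same input.
    t0 = time.perf_counter()
    e_catiled(blurred)
    t1 = time.perf_counter()
    catiled(blurred)
    t2 = time.perf_counter()
    return (t1 - t0) / (t2 - t1)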