Article

Analytical Solutions to Minimum-Norm Problems

by
Almudena Campos-Jiménez
1,†,
José Antonio Vílchez-Membrilla
2,†,
Clemente Cobos-Sánchez
2,† and
Francisco Javier García-Pacheco
1,*,†
1
Department of Mathematics, College of Engineering, University of Cadiz, 11519 Puerto Real, Spain
2
Department of Electronics, College of Engineering, University of Cadiz, 11510 Puerto Real, Spain
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(9), 1454; https://doi.org/10.3390/math10091454
Submission received: 3 April 2022 / Revised: 19 April 2022 / Accepted: 22 April 2022 / Published: 26 April 2022
(This article belongs to the Special Issue Functional Analysis, Topology and Quantum Mechanics II)

Abstract

For G ∈ ℝ^{m×n} and g ∈ ℝᵐ, the minimization min ‖Gψ − g‖₂ over ψ ∈ ℝⁿ is known as the Tykhonov regularization. We transport the Tykhonov regularization to an infinite-dimensional setting, that is, min ‖T(h) − k‖, where T : H → K is a continuous linear operator between Hilbert spaces H, K and h ∈ H, k ∈ K. In order to avoid an unbounded set of solutions for the Tykhonov regularization, we transform the infinite-dimensional Tykhonov regularization into a multiobjective optimization problem: min ‖T(h) − k‖ and min ‖h‖. We call it the bounded Tykhonov regularization. A Pareto-optimal solution of the bounded Tykhonov regularization is found. Finally, the bounded Tykhonov regularization is modified to introduce the precise Tykhonov regularization: min ‖T(h) − k‖ with ‖h‖ = α. The precise Tykhonov regularization is also optimally solved. All of these mathematical solutions are optimal for the design of Magnetic Resonance Imaging (MRI) coils.
MSC:
47L05; 47L90; 49J30; 90B50

1. Introduction

Optimization problems are among the questions shared between Pure and Applied Mathematics, studied in Operator Theory [1,2,3,4], in Differential Geometry [5,6,7,8], in the Geometry of Banach Spaces [9,10], and in all areas of Medical, Social, and Experimental Sciences [11,12,13,14,15].
By means of optimization problems, it is possible to model many real-life situations with precision, even though, in the case of multiobjective optimization problems, the existence of global optimal solutions (optimizing all the objective functions at once) is not guaranteed. This is why Pareto-optimal solutions were introduced in the literature of Optimization Theory. Informally, a Pareto-optimal solution is a feasible solution such that any other feasible solution that improves one of the objective functions must be strictly worse in another objective function.
As shown in [14,16,17,18,19], the design of optimal MRI coils is modeled by means of a particular case of optimization problems called minimum-norm problems [20], such as
min ‖ψ‖₂, Gψ = g;   min ‖ψ‖₂, ‖Gψ − g‖ ≤ D;   and   min ‖Gψ − g‖₂, ψ ∈ ℝⁿ,
for G ∈ ℝ^{m×n}, ψ ∈ ℝⁿ, g ∈ ℝᵐ, and D ≥ 0. Notice that the last of the three problems above is precisely the finite-dimensional Tykhonov regularization. A theoretical treatment in the framework of Operator Theory and Functional Analysis will be given to the above problems, transporting them to an infinite-dimensional setting, as well as a MATLAB encoding for their finite-dimensional versions in Appendices A–D.
We also introduce in the literature of Optimization Theory the following multiobjective optimization problem:
min ‖T(h) − k‖,   min ‖h‖,
where T : H → K is a continuous linear operator between Hilbert spaces H, K and h ∈ H, k ∈ K, which we call the bounded Tykhonov regularization. We provide a nontrivial Pareto-optimal solution of the bounded Tykhonov regularization. Sometimes, the bounded Tykhonov regularization might produce a nontrivial Pareto-optimal solution of an excessively small norm. This is why we introduce what we name the precise Tykhonov regularization:
min ‖T(h) − k‖,   ‖h‖ = α,
which we fully and optimally solve.

2. Materials and Methods

In this Methodology Section, we properly define the optimization problems that we will deal with. We will also gather all the necessary concepts, notions, techniques, and results needed to accomplish our analytical solutions for the previously mentioned optimization problems.

2.1. Mathematical Formulation of the Optimization Problems

The optimization problems that we will deal with in this manuscript are described next. As we mentioned before, these three problems arise from the optimal design of MRI coils.
Problem 1.
Let G ∈ ℝ^{m×n} and g ∈ ℝᵐ be in the range of G. Solve
min ‖ψ‖₂,  Gψ = g,
for ψ ∈ ℝⁿ.
Observe that, under the settings of Problem 1, g must be in the range of G; that is, there must exist at least one ψ ∈ ℝⁿ for which Gψ = g. Otherwise, the feasible region of Problem 1 is empty. Furthermore, if g = 0, then 0 is trivially the unique solution of Problem 1. The infinite-dimensional or abstract version of Problem 1 is displayed next.
Problem 2.
Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ T(X). Solve
min ‖x‖,  T(x) = y,
for x ∈ X.
Under the settings of Problem 2, x = 0 is the (unique) solution of Problem 2 if and only if y = 0 .
Problem 3.
Let G ∈ ℝ^{m×n}, g ∈ ℝᵐ \ {0}, and 0 < D < ‖g‖. Solve
min ‖ψ‖₂,  ‖Gψ − g‖ ≤ D,
for ψ ∈ ℝⁿ.
Notice that, under the settings of Problem 3, g does not necessarily need to be in the range of G. However, if the condition 0 < D < ‖g‖ is not imposed, then ‖G·0 − g‖ = ‖g‖ ≤ D allows 0 to be the unique solution of Problem 3. The infinite-dimensional or abstract version of Problem 3 is as follows.
Problem 4.
Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, y ∈ Y \ {0}, and 0 < D < ‖y‖. Solve
min ‖x‖,  ‖T(x) − y‖ ≤ D,
for x ∈ X.
Under the settings of Problem 4, the reason for D to lie in the open interval (0, ‖y‖) is again to avoid the trivial solution x = 0. Indeed, x = 0 is the (unique) solution of Problem 4 if and only if D ≥ ‖y‖. The following technical lemma ensures that any solution of Problem 4 must lie in the boundary of the feasible region. This lemma will be useful later on.
Lemma 1.
Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, y ∈ Y \ {0}, and 0 < D < ‖y‖. If x ∈ X is a solution of Problem 4, then ‖T(x) − y‖ = D.
Proof. 
If ‖T(x) − y‖ < D, then we can find δ with (‖y‖ − D)/(‖y‖ − ‖T(x) − y‖) ≤ δ < 1, which implies that ‖T(δx) − y‖ ≤ ‖T(δx) − δy‖ + ‖δy − y‖ = δ‖T(x) − y‖ + (1 − δ)‖y‖ ≤ D. Then, δx is in the feasible region of Problem 4, reaching the contradiction that ‖δx‖ = δ‖x‖ < ‖x‖. (Note that x ≠ 0, since otherwise ‖T(x) − y‖ = ‖y‖ > D.)    □
The third problem that we will deal with is the Tykhonov regularization.
Problem 5
(Finite-dimensional Tykhonov regularization). Let G R m × n and g R m . Solve
min ‖Gψ − g‖₂,  ψ ∈ ℝⁿ.
The infinite-dimensional or abstract version of Problem 5 follows next.
Problem 6
(Infinite-dimensional Tykhonov regularization). Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ Y. Solve
min ‖T(x) − y‖,  x ∈ X.
Notice that, under the settings of Problem 6, when y = 0, we obtain the trivial set of solutions given by ker(T).

2.2. Supporting Vectors

If X, Y are Banach Spaces and T : X → Y is a continuous linear operator, then the operator norm of T is defined as
‖T‖ := sup_{‖x‖ ≤ 1} ‖T(x)‖.
This norm turns the vector space of continuous linear operators from X to Y, CL(X, Y), into a Banach Space. When X = Y, we will simply denote it by CL(X). When Y = 𝕂 (ℝ or ℂ), we will simply denote it by X*.
The concept of the supporting vector was first introduced in [1], although it appeared implicitly and scattered throughout the literature of Banach Space Theory, as for instance in [3,4,9,10].
Definition 1
(Supporting vector). Let X , Y be Banach Spaces. Let T : X Y be a continuous linear operator. The set of supporting vectors of T is defined as
suppv(T) := {x ∈ X : ‖x‖ = 1 and ‖T(x)‖ = ‖T‖}.
We refer the reader to [2,21,22] for a topological and geometrical study of the set of supporting vectors of a continuous linear operator. Supporting vectors have been successfully applied to solve multiobjective optimization problems that typically arise in Bioengineering, Physics, and Statistics [14,15,23,24,25], improving considerably the results obtained by means of other techniques, such as Heuristic methods [16,18,19].
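In the Euclidean setting ℓ₂ⁿ, a supporting vector can be computed numerically from the singular value decomposition, since ‖G‖ is the largest singular value of G and it is attained at an associated right singular vector. The following Python/NumPy sketch illustrates this (the paper's own code in the appendices is MATLAB; the function name `supporting_vector` and the example matrix are ours):

```python
import numpy as np

def supporting_vector(G):
    # A supporting vector of G: a unit vector x with ||G x||_2 = ||G||_2.
    # For a matrix, this is a right singular vector associated with the
    # largest singular value, which equals the operator norm.
    _, s, Vt = np.linalg.svd(G)
    return Vt[0], s[0]

G = np.array([[3.0, 0.0],
              [0.0, 1.0]])
x, opnorm = supporting_vector(G)   # here x = ±(1, 0) and opnorm = 3
```

The returned vector is unique only up to sign (and, for a degenerate largest singular value, up to the choice of singular vector), which matches the fact that suppv(G) need not be a singleton.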
Definition 2
(1-Supporting vector). Let X be a Banach Space. Let f X * be a continuous linear functional. The set of 1-supporting vectors of f is defined as
suppv₁(f) := {x ∈ X : ‖x‖ = 1 and f(x) = ‖f‖}.
The 1-supporting vectors are special cases of supporting vectors; that is, suppv₁(f) ⊆ suppv(f). We will strongly rely on 1-supporting vectors later on. A standard geometrical property of 1-supporting vectors is shown in the next remark.
Remark 1.
Let X be a Banach Space. Let f ∈ X* \ {0}. If x, y ∈ suppv₁(f), then tx + (1 − t)y ∈ suppv₁(f) for all t ∈ [0, 1]. In other words, suppv₁(f) is a convex subset of the unit sphere S_X of X.

2.3. Riesz Representation Theorem on Hilbert Spaces

The Riesz Representation Theorem is one of the most important results in Functional Analysis and is crucial for working with self-adjoint operators on Hilbert spaces.
Theorem 1
(Riesz Representation Theorem). Let H be a Hilbert space. The dual map of H,
J_H : H → H*, h ↦ J_H(h) := h* = (·|h),
where
J_H(h) = h* = (·|h) : H → 𝕂, k ↦ J_H(h)(k) = h*(k) = (k|h),
is a surjective linear isometry between H and H*.
In the frame of the Geometry of Banach Spaces, J_H is called the duality mapping. In Quantum Mechanics, the dual map J_H has a different name and notation. Under the settings of the Riesz Representation Theorem and by relying on certain techniques of the Geometry of Banach Spaces and on Remark 1, it can be proven that if h ∈ H \ {0}, then h/‖h‖ is the only 1-supporting vector of h*, that is, suppv₁(h*) = {h/‖h‖}.
Let H be a Hilbert space. For every closed subspace M of H, the orthogonal complement of M is denoted by M⊥ and the orthogonal projection of H onto M is denoted by p_M. Notice that H = M ⊕₂ M⊥; in other words, for all x ∈ H,
x = p_M(x) + p_{M⊥}(x)  and  ‖x‖² = ‖p_M(x)‖² + ‖p_{M⊥}(x)‖².
If H, K are Hilbert spaces and T : H → K is a continuous linear operator, then there exists a unique continuous linear operator T* : K → H such that (T(h)|k) = (h|T*(k)) for all h ∈ H and all k ∈ K. This operator T* is called the adjoint of T. The following technical lemma is well known in the literature of Functional Analysis and Operator Theory, and it will be used later on.
Lemma 2.
Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator. Then, T(H)⊥ = ker(T*) and, consequently, the closure of T(H) coincides with ker(T*)⊥.
The finite-dimensional Hilbert spaces involved in the finite-dimensional problems previously mentioned will be denoted by ℓ₂ⁿ := (ℝⁿ, ‖·‖₂), where ‖·‖₂ denotes the Euclidean norm.
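The decomposition x = p_M(x) + p_{M⊥}(x) and the Pythagorean identity above can be checked numerically in ℓ₂ⁿ. A small Python/NumPy sketch (the appendices use MATLAB; the subspace M and the vector x below are illustrative choices of ours):

```python
import numpy as np

def orthogonal_projection(A):
    # Matrix of the orthogonal projection p_M onto M = column span of A,
    # assuming A has full column rank.
    return A @ np.linalg.solve(A.T @ A, A.T)

A = np.array([[1.0], [1.0]])        # M = span{(1, 1)} inside l2^2
P = orthogonal_projection(A)
x = np.array([3.0, 1.0])
pM = P @ x                          # p_M(x)
pMperp = x - pM                     # p_{M perp}(x)
```

Here pM + pMperp recovers x, the two components are orthogonal, and ‖x‖² = ‖p_M(x)‖² + ‖p_{M⊥}(x)‖², exactly as in the displayed identity.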

3. Results

This section is aimed at providing analytical solutions for Problems 2 and 4, which will automatically work for Problems 1 and 3, respectively, since these last two problems are particular cases of the first two.

3.1. Analytical Solution of Problems 1 and 2 in the Hilbert Space Context

Problem 2 will be actually tackled, and solved completely, in the Hilbert space context. The reformulation of Problem 2 in the previously mentioned setting follows next.
Problem 7.
Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ T(H). Solve
min ‖h‖,  T(h) = k,
for h ∈ H.
Observe that Problem 1 is still a particular case of Problem 7, which is itself a particular case of Problem 2.
Lemma 3.
Let H be a Hilbert space. Let M be a closed subspace of H. For every x ∈ H,
(x + M) ∩ M⊥ = {p_{M⊥}(x)}  and  min{‖x + m‖ : m ∈ M} = ‖p_{M⊥}(x)‖.
Proof. 
We will show first that (x + M) ∩ M⊥ = {p_{M⊥}(x)}. Indeed, on the one hand, it is clear that p_{M⊥}(x) ∈ M⊥ and, by (9), p_{M⊥}(x) = x − p_M(x) ∈ x + M, so {p_{M⊥}(x)} ⊆ (x + M) ∩ M⊥. On the other hand, take arbitrary elements m ∈ M and m′ ∈ M⊥ such that x + m = m′. By using again (9),
m′ = x + m = p_M(x) + p_{M⊥}(x) + m,
meaning that
m′ − p_{M⊥}(x) = p_M(x) + m ∈ M⊥ ∩ M = {0}.
As a consequence, m′ = p_{M⊥}(x), and so, x + m = m′ = p_{M⊥}(x). This proves the inclusion (x + M) ∩ M⊥ ⊆ {p_{M⊥}(x)}. Let us finally prove that min{‖x + m‖ : m ∈ M} = ‖p_{M⊥}(x)‖. Fix an arbitrary m ∈ M. By virtue of (9), note that
‖x + m‖² = ‖p_M(x) + p_{M⊥}(x) + m‖² = ‖p_M(x + m) + p_{M⊥}(x)‖² = ‖p_M(x + m)‖² + ‖p_{M⊥}(x)‖² ≥ ‖p_{M⊥}(x)‖².
Therefore,
min{‖x + m‖ : m ∈ M} ≥ ‖p_{M⊥}(x)‖.
Since p_{M⊥}(x) ∈ x + M, as we have just proven, we finally conclude that
min{‖x + m‖ : m ∈ M} = ‖p_{M⊥}(x)‖.
   □
Remark 2.
Under the settings of Lemma 3, for every y ∈ x + M, it is clear that y + M = x + M and p_{M⊥}(x) = p_{M⊥}(y).
The following theorem solves Problem 7 completely.
Theorem 2.
Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator. For every k₀ ∈ T(H) and every h₀ ∈ T⁻¹(k₀), we have the following:
1. 
min{‖h‖ : h ∈ H, T(h) = k₀} = ‖p_{ker(T)⊥}(h₀)‖.
2. 
The above min is attained at p_{ker(T)⊥}(h₀) ∈ T⁻¹(k₀).
3. 
If h₁ ∈ T⁻¹(k₀), then p_{ker(T)⊥}(h₀) = p_{ker(T)⊥}(h₁).
Proof. 
In the first place, observe that
{h ∈ H : T(h) = k₀} = T⁻¹(k₀) = h₀ + ker(T).
By relying on Lemma 3,
min{‖h‖ : h ∈ H, T(h) = k₀} = min{‖h₀ + h‖ : h ∈ ker(T)} = ‖p_{ker(T)⊥}(h₀)‖.
Finally, by taking into consideration Lemma 3 together with Remark 2, we see that p_{ker(T)⊥}(h₀) ∈ T⁻¹(k₀) and p_{ker(T)⊥}(h₀) = p_{ker(T)⊥}(h₁) for each h₁ ∈ T⁻¹(k₀).    □
Now, we are in the right position to provide a full solution to Problem 1.
Corollary 1.
Let G ∈ ℝ^{m×n} and g ∈ ℝᵐ be in the range of G. The solution of Problem 1 is given by ‖p_{ker(G)⊥}(ψ₀)‖₂ for any ψ₀ ∈ ℝⁿ such that Gψ₀ = g, and it is attained at p_{ker(G)⊥}(ψ₀).
Proof. 
It only suffices to call on Theorem 2 by taking H := ℓ₂ⁿ, K := ℓ₂ᵐ, T := G, k₀ := g, and h₀ := ψ₀.    □
A MATLAB encoding for Corollary 1 is available in Appendix A.
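The appendices provide the MATLAB code; the following Python/NumPy sketch of Corollary 1 (the function name and example data are ours) projects a particular solution ψ₀ onto ker(G)⊥ to obtain the minimum-norm solution:

```python
import numpy as np

def min_norm_solution(G, psi0):
    # Given any particular solution psi0 of G psi = g, Corollary 1 says the
    # minimum-norm solution is p_{ker(G) perp}(psi0) = psi0 - p_{ker(G)}(psi0).
    _, s, Vt = np.linalg.svd(G)
    r = int(np.sum(s > 1e-12))      # numerical rank of G
    N = Vt[r:].T                    # columns form an orthonormal basis of ker(G)
    return psi0 - N @ (N.T @ psi0)  # subtract the component lying in ker(G)

G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
g = np.array([2.0, 2.0])
psi0 = np.array([2.0, 0.0, 2.0])    # one particular solution: G @ psi0 == g
psi = min_norm_solution(G, psi0)    # minimum-norm solution of G psi = g
```

The result coincides with the Moore-Penrose pseudoinverse solution pinv(G) @ g, which is another way of computing the same projection.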

3.2. Analytical Solution of Problem 7 When K := 𝕂

If we take K := 𝕂 in Problem 7, then its solution can also be computed in terms of 1-supporting vectors.
Theorem 3.
Let H be a Hilbert space. Let h₀* ∈ H* \ {0}. For every λ ∈ 𝕂,
min{‖h‖ : h ∈ H, h₀*(h) = λ} = |λ|/‖h₀‖.
Even more, the previous min is attained at λh₀/‖h₀‖².
Proof. 
First off, note that, for every h ∈ H with h₀*(h) = λ,
|λ| = |h₀*(h)| ≤ ‖h₀*‖‖h‖ = ‖h₀‖‖h‖,
meaning that
‖h‖ ≥ |λ|/‖h₀‖.
This proves the inequality:
min{‖h‖ : h ∈ H, h₀*(h) = λ} ≥ |λ|/‖h₀‖.
In order to prove the reverse inequality, we will make use of λh₀/‖h₀‖². Observe that
h₀*(λh₀/‖h₀‖²) = λh₀*(h₀)/‖h₀‖² = λ‖h₀‖²/‖h₀‖² = λ.
Therefore,
λh₀/‖h₀‖² ∈ {h ∈ H : h₀*(h) = λ}.
Next,
‖λh₀/‖h₀‖²‖ = |λ|‖h₀‖/‖h₀‖² = |λ|/‖h₀‖.
This finally shows that
min{‖h‖ : h ∈ H, h₀*(h) = λ} = |λ|/‖h₀‖,
and it is attained at λh₀/‖h₀‖².    □
As an immediate corollary, if we take m = 1 in Problem 1, then its solution can also be computed in terms of 1-supporting vectors.
Corollary 2.
Let G ∈ ℝ^{1×n}, G ≠ 0, and g ∈ ℝ. The solution of Problem 1 is given by |g|/‖Gᵗ‖₂, and it is attained at ψ := gGᵗ/‖Gᵗ‖₂².
Proof. 
It only suffices to call on Theorem 3 by taking H := ℓ₂ⁿ, h₀* := G, λ := g, and h₀ := Gᵗ.    □
A MATLAB encoding for Corollary 2 is available in Appendix B.
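Corollary 2 admits a direct closed-form implementation. A Python/NumPy sketch (the appendices use MATLAB; the function name and data here are illustrative):

```python
import numpy as np

def min_norm_one_row(G_row, g):
    # Corollary 2: for a single nonzero row G, the minimum of ||psi||_2 subject
    # to G psi = g is |g|/||G^t||_2, attained at psi = g G^t / ||G^t||_2^2.
    Gt = np.asarray(G_row, dtype=float).reshape(-1)
    value = abs(g) / np.linalg.norm(Gt)
    psi = g * Gt / np.dot(Gt, Gt)
    return value, psi

value, psi = min_norm_one_row([3.0, 4.0], 10.0)
```

For this example the minimum value is 10/5 = 2, and the minimizer lies on the hyperplane 3ψ₁ + 4ψ₂ = 10.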

3.3. Partial Solution of Problem 3

A particular version of Problem 3 was partially solved in [20] (Corollary 13). Here, we will follow a completely different approach. Before tackling Problem 3, we will first solve particular cases of it.
Problem 8.
Let X be a Banach Space, f ∈ X* \ {0}, and 0 < c ≤ d. Solve
min ‖x‖,  c ≤ f(x) ≤ d.
We will first solve Problem 8 by relying on 1-supporting vectors.
Lemma 4.
Let X be a Banach Space, f ∈ X* \ {0}, and 0 < c ≤ d. The set of optimal solutions of Problem 8 is given by sol(11) = {(c/‖f‖)x : x ∈ suppv₁(f)}. In particular, sol(11) ≠ ∅ if and only if suppv₁(f) ≠ ∅.
Proof. 
Fix an arbitrary y ∈ sol(11). We will show that x := (‖f‖/c)y ∈ suppv₁(f). For every z ∈ B_X \ ker(f), (c/f(z))z is in the feasible region of (11) since f((c/f(z))z) = c; therefore,
‖y‖ ≤ ‖(c/f(z))z‖ = (c/|f(z)|)‖z‖.
Now, if we take a sequence (zₙ)ₙ∈ℕ ⊆ S_X \ ker(f) such that |f(zₙ)| → ‖f‖, we obtain from (12) that ‖y‖ ≤ c/‖f‖. On the other hand, c ≤ f(y) ≤ ‖f‖‖y‖, that is, c/‖f‖ ≤ ‖y‖. As a consequence, ‖y‖ = c/‖f‖, meaning that ‖x‖ = 1. Finally, notice that
‖f‖ ≥ f(x) = f((‖f‖/c)y) = (‖f‖/c)f(y) ≥ (‖f‖/c)c = ‖f‖,
which implies that f(x) = ‖f‖; hence x ∈ suppv₁(f).
Conversely, take any x ∈ suppv₁(f). In the first place, f((c/‖f‖)x) = c, so (c/‖f‖)x is in the feasible region of problem (11); that is, (c/‖f‖)x is a feasible solution. Next, take y as another feasible solution of (11). Then, c ≤ f(y) ≤ ‖f‖‖y‖; hence,
‖y‖ ≥ c/‖f‖ = ‖(c/‖f‖)x‖.
This shows that (c/‖f‖)x ∈ sol(11); in other words, (c/‖f‖)x is an optimal solution.
   □
Notice that the feasible region of Problem 8, c ≤ f(x) ≤ d, can be rewritten as |f(x) − (c + d)/2| ≤ (d − c)/2; hence, Problem 8 is of the same form as Problem 3 whenever c < d. In the case that c = d, Problem 8 is a particular case of Problem 1. In fact, Problem 3 can be rewritten as follows.
Problem 9.
Let G ∈ ℝ^{m×n}, g ∈ ℝᵐ \ {0}, and 0 < D < ‖g‖. Solve
min ‖ψ‖₂, ‖Gψ − g‖ ≤ D  ⟺  min ‖ψ‖₂, |Gᵢψ − gᵢ| ≤ D (i = 1, …, m)  ⟺  min ‖ψ‖₂, gᵢ − D ≤ Gᵢψ ≤ gᵢ + D (i = 1, …, m),
for ψ ∈ ℝⁿ, where g = (g₁, …, gₘ)ᵗ and Gᵢ is the i-th row vector of G for i = 1, …, m.
If, under the settings of Problem 9, we assume that g₁ = ⋯ = gₘ > 0, which is consistent with the design of optimal MRI coils according to [14,16,18,19], then gᵢ − D = g₁ − D > 0 for all i = 1, …, m. As a consequence, each constraint gᵢ − D ≤ Gᵢψ ≤ gᵢ + D is of the same form as the feasible region (11) of Problem 8. Observe that, under this assumption, Gᵢ ≠ 0 for all i = 1, …, m. In this situation, the infinite-dimensional generalization of Problem 9 is given as follows.
Problem 10.
Let X be a Banach Space, fᵢ ∈ X* \ {0}, i = 1, …, m, and 0 < c ≤ d. Solve
min ‖x‖,  c ≤ f₁(x) ≤ d, …, c ≤ fₘ(x) ≤ d,
for x ∈ X.
Notice that, in the case c < d, Problem 10 is a particular case of Problem 4 since one can define the following continuous linear operator:
T : X → ℝᵐ, x ↦ T(x) := (f₁(x), …, fₘ(x)).
Then, the feasible region of Problem 10 is precisely
{x ∈ X : c ≤ fᵢ(x) ≤ d, i = 1, …, m} = {x ∈ X : ‖T(x) − g‖∞ ≤ D},
where g := ((c + d)/2, …, (c + d)/2) ∈ ℝᵐ, D := (d − c)/2, and ‖·‖∞ denotes the sup norm on ℝᵐ. If c = d, then by using the same operator T given in (15), it can be seen that Problem 10 is a particular case of Problem 2. We will strongly rely on Lemma 4 to approach the optimal solutions of Problem 10.
Theorem 4.
Let X be a Banach Space, fᵢ ∈ X* \ {0}, i = 1, …, m, and 0 < c ≤ d. If there exist i ∈ {1, …, m} and x ∈ suppv₁(fᵢ) such that (c/‖fᵢ‖)x ∈ fₖ⁻¹([c, d]) for all k ∈ {1, …, m} \ {i}, then (c/‖fᵢ‖)x is an optimal solution of Problem 10.
Proof. 
In the first place, notice that (c/‖fᵢ‖)x is a feasible solution of Problem 10; that is, it belongs to the feasible region simply because
c ≤ fₖ((c/‖fᵢ‖)x) ≤ d
for all k = 1, …, m. Let z ∈ X be another feasible solution of Problem 10. In particular, c ≤ fᵢ(z) ≤ d; therefore, if we consider Problem 8 for fᵢ, we have that (c/‖fᵢ‖)x is an optimal solution of such a problem in view of Lemma 4. As a consequence, ‖(c/‖fᵢ‖)x‖ ≤ ‖z‖. Then, we conclude that (c/‖fᵢ‖)x is an optimal solution of Problem 10.    □

3.4. Analytical Solution of Problems 5 and 6 in the Hilbert Space Context

The solution of the finite-dimensional Tykhonov regularization is well known in the literature of Optimization Theory. Here, we present a concise argument to solve the infinite-dimensional Tykhonov regularization in the Hilbert space context, whose formulation follows.
Problem 11
(Infinite-dimensional Tykhonov regularization). Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Solve
min ‖T(h) − k‖,  h ∈ H.
We will rely on the basic techniques of Hilbert spaces and Operator Theory.
Proposition 1.
Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Then:
1. 
If k ∈ T(H), then min{‖T(h) − k‖ : h ∈ H} = 0, and it is attained at any element of T⁻¹(k).
2. 
If T has dense range, then inf{‖T(h) − k‖ : h ∈ H} = 0. Hence, Problem 11 has a solution if and only if k ∈ T(H).
Proof. 
  • This is a simple and trivial exercise.
  • Suppose that T has dense range; that is, the closure of T(H) is K. Then there exists a sequence (hₙ)ₙ∈ℕ such that (T(hₙ))ₙ∈ℕ converges to k. This means that inf{‖T(h) − k‖ : h ∈ H} = 0. Next, if Problem 11 has a solution h₀ ∈ H, then ‖T(h₀) − k‖ = min{‖T(h) − k‖ : h ∈ H} = inf{‖T(h) − k‖ : h ∈ H} = 0, which implies that k = T(h₀) ∈ T(H). Conversely, suppose that k ∈ T(H); that is, there exists h₀ ∈ H with T(h₀) = k. Then, ‖T(h₀) − k‖ = 0, so it is clear that inf{‖T(h) − k‖ : h ∈ H} = min{‖T(h) − k‖ : h ∈ H} = 0.
   □
The following theorem is a refinement of Lemma 3.
Theorem 5.
Let H be a Hilbert space. Let M be a closed subspace of H. Let h ∈ H. Then, min{‖h − m‖ : m ∈ M} = ‖p_{M⊥}(h)‖. Even more, the previous min is attained at p_M(h).
Proof. 
We can write h = p_M(h) + p_{M⊥}(h). For every m ∈ M,
‖h − m‖² = ‖p_M(h) + p_{M⊥}(h) − m‖² = ‖p_M(h) − m‖² + ‖p_{M⊥}(h)‖² ≥ ‖p_{M⊥}(h)‖².
This shows that min{‖h − m‖ : m ∈ M} ≥ ‖p_{M⊥}(h)‖. Finally, notice that p_M(h) ∈ M and ‖h − p_M(h)‖ = ‖p_{M⊥}(h)‖. This shows that min{‖h − m‖ : m ∈ M} = ‖p_{M⊥}(h)‖, and the min is attained at p_M(h).    □
As an immediate consequence of Theorem 5, we obtain the following corollary, which fully solves Problem 11.
Corollary 3.
Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator of closed range. For every k₀ ∈ K, we have the following:
1. 
min{‖T(h) − k₀‖ : h ∈ H} = ‖p_{T(H)⊥}(k₀)‖.
2. 
The above min is attained at any element of T⁻¹(p_{T(H)}(k₀)).
3. 
arg min{‖T(h) − k₀‖ : h ∈ H} is bounded if and only if ker(T) = {0}.
Proof. 
The first two items are a direct application of Theorem 5, so let us simply take care of the third item. Note that arg min{‖T(h) − k₀‖ : h ∈ H} = T⁻¹(p_{T(H)}(k₀)) = h₀ + ker(T) for any h₀ ∈ T⁻¹(p_{T(H)}(k₀)). As a consequence, arg min{‖T(h) − k₀‖ : h ∈ H} is bounded if and only if ker(T) = {0}.    □
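In ℓ₂ⁿ, Corollary 3 is the familiar least-squares picture: the minimum equals the distance from k₀ to the range of G, and the argmin is a translate of ker(G), hence unbounded whenever ker(G) ≠ {0}. A Python/NumPy sketch (our illustration; the appendices use MATLAB):

```python
import numpy as np

def tykhonov_argmin(G, g):
    # One element of arg min ||G h - g||_2 together with the minimal value,
    # which equals ||p_{range(G) perp}(g)||_2 by Corollary 3(1).
    h0, *_ = np.linalg.lstsq(G, g, rcond=None)
    return h0, np.linalg.norm(G @ h0 - g)

G = np.array([[1.0, 1.0],
              [0.0, 0.0]])          # ker(G) = span{(1, -1)} is nontrivial
g = np.array([2.0, 5.0])
h0, res = tykhonov_argmin(G, g)
# h0 + t(1, -1) attains the same residual for every t: the argmin is unbounded
```

The shifted points h0 + t(1, −1) all belong to the argmin, which is exactly the unboundedness stated in Corollary 3(3) when the kernel is nontrivial.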

4. Discussion

We discuss in this section several aspects of the obtained results.

4.1. Bounded Tykhonov Regularization

The bounded Tykhonov regularization is a novel concept conceived in this manuscript: an original way to tackle the classical Tykhonov regularization that allows designing efficient MRI coils. It is described as the following multiobjective optimization problem.
Problem 12
(Finite-dimensional bounded Tykhonov regularization). Let G ∈ ℝ^{m×n} and g ∈ ℝᵐ. Solve
min ‖Gψ − g‖₂,  min ‖ψ‖₂,
for ψ ∈ ℝⁿ.
The infinite-dimensional version of the bounded Tykhonov regularization follows now.
Problem 13
(Infinite-dimensional bounded Tykhonov regularization). Let X, Y be Banach Spaces, T : X → Y a continuous linear operator, and y ∈ Y. Solve
min ‖T(x) − y‖,  min ‖x‖,
for x ∈ X.
The infinite-dimensional bounded Tykhonov regularization in the Hilbert space context is described next.
Problem 14.
Let H, K be Hilbert spaces, T : H → K a continuous linear operator, and k ∈ K. Solve
min ‖T(h) − k‖,  min ‖h‖,
for h ∈ H.
We will find a nontrivial Pareto-optimal solution of Problem 14.
Theorem 6.
Let H, K be Hilbert spaces. Let T : H → K be a continuous linear operator of closed range. Let k ∈ K. Then, p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) is a Pareto-optimal solution of (19).
Proof. 
Bear in mind that Theorem 2(3) ensures that p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) is a singleton; hence, we can take h₀ ∈ H with {h₀} = p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))). In view of Theorem 2(2), p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) ⊆ T⁻¹(p_{T(H)}(k)). By applying Corollary 3(2), arg min{‖T(h) − k‖ : h ∈ H} = T⁻¹(p_{T(H)}(k)). This implies that ‖T(h₀) − k‖ ≤ ‖T(h) − k‖ for all h ∈ H. Suppose on the contrary that h₀ is not a Pareto-optimal solution of (19). Then there exists h₁ ∈ H satisfying one of the following two possibilities:
  • ‖T(h₁) − k‖ ≤ ‖T(h₀) − k‖ and ‖h₁‖ < ‖h₀‖. As a consequence, ‖T(h₀) − k‖ = ‖T(h₁) − k‖, so h₁ ∈ arg min{‖T(h) − k‖ : h ∈ H} = T⁻¹(p_{T(H)}(k)). Finally, by calling again on Theorem 2(2), arg min{‖h‖ : h ∈ T⁻¹(p_{T(H)}(k))} = p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))), so h₀ is the element of minimum norm of T⁻¹(p_{T(H)}(k)), reaching the contradiction that ‖h₀‖ ≤ ‖h₁‖.
  • ‖T(h₁) − k‖ < ‖T(h₀) − k‖ and ‖h₁‖ ≤ ‖h₀‖. This is impossible because we have already proven that ‖T(h₀) − k‖ ≤ ‖T(h₁) − k‖.
As a consequence of the two previous contradictions, we deduce that the singleton p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))) is a Pareto-optimal solution of (19).    □
Under the settings of Theorem 6 and by taking into consideration Lemma 2, the reader should notice that ker(T*)⊥ = T(H) (the range being closed); thus, p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) = p_{ker(T)⊥}(T⁻¹(p_{T(H)}(k))). Hence, p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) is a Pareto-optimal solution of (19). A more intuitive way of understanding Theorem 6 is the following. It is not hard to see, by keeping in mind Theorem 2, that p_{ker(T)⊥}(T⁻¹(p_{ker(T*)⊥}(k))) is the solution of
min ‖h‖,  T(h) = p_{ker(T*)⊥}(k).
What Corollary 3(2) is saying is that the set of constraints of the above problem,
{h ∈ H : T(h) = p_{ker(T*)⊥}(k)},
is indeed the set of solutions of
min ‖T(h) − k‖,  h ∈ H.
Corollary 4.
Let G ∈ ℝ^{m×n} and g ∈ ℝᵐ. Then, p_{ker(G)⊥}(G⁻¹(p_{ker(Gᵗ)⊥}(g))) is a Pareto-optimal solution of (17).
Proof. 
It only suffices to call on Theorem 6 by taking H := ℓ₂ⁿ, K := ℓ₂ᵐ, T := G, and k := g.    □
A MATLAB encoding for Corollary 4 is available in Appendix C.
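Corollary 4's Pareto-optimal point p_{ker(G)⊥}(G⁻¹(p_{ker(Gᵗ)⊥}(g))) can be computed via the SVD and coincides with the Moore-Penrose pseudoinverse applied to g. A Python/NumPy sketch (the appendices use MATLAB; the function name and data are ours):

```python
import numpy as np

def pareto_bounded_tykhonov(G, g):
    # Project g onto range(G) = ker(G^t) perp, then pull back with minimum
    # norm (i.e., project the preimage onto ker(G) perp). Via the SVD,
    # both steps together amount to applying the pseudoinverse of G.
    U, s, Vt = np.linalg.svd(G)
    r = int(np.sum(s > 1e-12))             # numerical rank
    coeffs = (U[:, :r].T @ g) / s[:r]      # coordinates of p_{range(G)}(g)
    return Vt[:r].T @ coeffs               # minimum-norm pullback

G = np.array([[1.0, 0.0],
              [0.0, 0.0]])
g = np.array([3.0, 4.0])
psi = pareto_bounded_tykhonov(G, g)
```

For this data the residual ‖Gψ − g‖₂ is as small as possible (the second component of g is unreachable), and among all such minimizers ψ has minimum norm.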

4.2. Precise Tykhonov Regularization

The bounded Tykhonov regularization might produce a solution of an excessively small norm. Sometimes it is necessary to obtain a solution of the Tykhonov regularization with a certain predetermined norm. This is what we call the precise Tykhonov regularization.
Problem 15
(Finite-dimensional precise Tykhonov regularization). Let G ∈ ℝ^{m×n} with G ≠ 0 and ker(G) ≠ {0}, g ∈ ℝᵐ, and α ≥ min{‖ψ‖₂ : ψ ∈ arg min ‖Gψ − g‖₂}. Solve
min ‖Gψ − g‖₂,  ‖ψ‖₂ = α,
for ψ ∈ ℝⁿ.
Under the settings of Problem 15, by bearing in mind Corollary 3(3), if ker(G) = {0}, then arg min ‖Gψ − g‖₂ is a singleton. This is why it is required that ker(G) ≠ {0}. The infinite-dimensional version of the precise Tykhonov regularization follows now.
Problem 16
(Infinite-dimensional precise Tykhonov regularization). Let X, Y be Banach Spaces, T : X → Y a nonzero continuous linear operator with ker(T) ≠ {0}, y ∈ Y, and α ≥ min{‖x‖ : x ∈ arg min ‖T(x) − y‖}. Solve
min ‖T(x) − y‖,  ‖x‖ = α,
for x ∈ X.
The infinite-dimensional precise Tykhonov regularization in the Hilbert space context is described next. Observe that, in accordance with Corollary 3(2),
min{‖h‖ : h ∈ arg min ‖T(h) − k‖} = dist(0, T⁻¹(p_{T(H)}(k))).
Problem 17.
Let H, K be Hilbert spaces, T : H → K a nonzero continuous linear operator with ker(T) ≠ {0}, k ∈ K, and α ≥ dist(0, T⁻¹(p_{T(H)}(k))). Solve
min ‖T(h) − k‖,  ‖h‖ = α,
for h ∈ H.
Notice that Problem 17 is a single-objective optimization problem. We will find an optimal solution of Problem 17. For this, we will make use of several technical results from Banach Space Theory and Operator Theory. Recall that, in a vector space, a linear manifold is a translation of a subspace. The dimension of a linear manifold is, by definition, the dimension of the subspace.
Theorem 7.
Let X be a Banach Space. Let M ⊆ X be a linear manifold with dim(M) ≥ 1. If α > dist(0, M), then there exists m₀ ∈ M such that ‖m₀‖ = α.
Proof. 
On the one hand, since dist(0, M) := inf{‖m‖ : m ∈ M}, there exists m₁ ∈ M with dist(0, M) ≤ ‖m₁‖ < α. On the other hand, since dim(M) ≥ 1, we have that M is unbounded; thus, there exists m₂ ∈ M with ‖m₂‖ > α. Consider the continuous function
φ : [0, 1] → [0, ∞), t ↦ φ(t) := ‖t m₂ + (1 − t) m₁‖.
Notice that φ(0) = ‖m₁‖ < α and φ(1) = ‖m₂‖ > α. As a consequence, Bolzano's Theorem allows the existence of t₀ ∈ (0, 1) such that φ(t₀) = α. Finally, it only suffices to take m₀ = t₀ m₂ + (1 − t₀) m₁.    □
Observe that Theorem 7 works with the exact same proof if we replace "linear manifold with dimension ≥ 1" with "unbounded convex subset". Theorem 7 can actually be obtained in the Hilbert space setting with much less effort.
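The Bolzano argument in Theorem 7 is constructive: since the norm is convex along the segment, the sublevel set {t : φ(t) < α} is an initial subinterval of [0, 1], so the crossing point can be located by bisection. A Python/NumPy sketch under the theorem's hypotheses ‖m₁‖ < α < ‖m₂‖ (our own illustration; function name and data are ours):

```python
import numpy as np

def point_of_norm(m1, m2, alpha, tol=1e-10):
    # Bisection for t0 with ||t0 m2 + (1 - t0) m1|| = alpha, assuming
    # ||m1|| < alpha < ||m2||. phi(t) = ||t m2 + (1 - t) m1|| is continuous
    # and convex on [0, 1], so {t : phi(t) < alpha} is an initial interval.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(mid * m2 + (1.0 - mid) * m1) < alpha:
            lo = mid
        else:
            hi = mid
    t0 = 0.5 * (lo + hi)
    return t0 * m2 + (1.0 - t0) * m1

m0 = point_of_norm(np.array([1.0, 0.0]), np.array([5.0, 0.0]), 3.0)
```

As the remark below notes, in the Hilbert space setting this search is unnecessary, since the scaling factor along a kernel direction can be written in closed form.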
Remark 3.
Let H, K be Hilbert spaces. Let T : H → K be a nonzero continuous linear operator such that ker(T) ≠ {0}. Let k ∈ K. Denote δ := dist(0, T⁻¹(k)) = min{‖h‖ : h ∈ T⁻¹(k)}, and let α ≥ δ. Fix an arbitrary h₁ ∈ ker(T) \ {0}. Notice that h₂ := √(α² − δ²) h₁/‖h₁‖ satisfies that h₂ ∈ ker(T), and by virtue of Theorem 2, for every h₀ ∈ T⁻¹(k), p_{ker(T)⊥}(h₀) + h₂ ∈ T⁻¹(k) and
‖p_{ker(T)⊥}(h₀) + h₂‖ = √(‖p_{ker(T)⊥}(h₀)‖² + ‖h₂‖²) = √(δ² + α² − δ²) = α.
Remark 3 allows easily solving the infinite-dimensional precise Tykhonov regularization in the Hilbert space context, that is Problem 17.
Theorem 8.
Let H, K be Hilbert spaces, T : H → K a nonzero continuous linear operator with ker(T) ≠ {0}, k ∈ K, and α ≥ δ := dist(0, T⁻¹(p_{T(H)}(k))). For every h₀ ∈ T⁻¹(p_{T(H)}(k)) and every h₁ ∈ ker(T) \ {0}, an optimal solution of Problem 17 is given by p_{ker(T)⊥}(h₀) + √(α² − δ²) h₁/‖h₁‖.
Proof. 
First off, notice that, according to Remark 3,
‖p_{ker(T)⊥}(h₀) + √(α² − δ²) h₁/‖h₁‖‖ = √(‖p_{ker(T)⊥}(h₀)‖² + α² − δ²) = √(δ² + α² − δ²) = α.
This means that p_{ker(T)⊥}(h₀) + √(α² − δ²) h₁/‖h₁‖ belongs to the feasible region of Problem 17. Next, by applying Corollary 3, arg min{‖T(h) − k‖ : h ∈ H} = T⁻¹(p_{T(H)}(k)) = h₀ + ker(T), because h₀ ∈ T⁻¹(p_{T(H)}(k)). In accordance with Theorem 2, p_{ker(T)⊥}(h₀) ∈ T⁻¹(p_{T(H)}(k)). In fact, Theorem 2 ensures that arg min{‖h‖ : h ∈ T⁻¹(p_{T(H)}(k))} = {p_{ker(T)⊥}(h₀)}. Finally, by using again Corollary 3, if h ∈ H with ‖h‖ = α, then
‖T(p_{ker(T)⊥}(h₀) + √(α² − δ²) h₁/‖h₁‖) − k‖ = ‖T(h₀) − k‖ = ‖p_{T(H)}(k) − k‖ = ‖p_{T(H)⊥}(k)‖ ≤ ‖T(h) − k‖.
   □
Corollary 5.
Let G ∈ ℝ^{m×n} with G ≠ 0 and ker(G) ≠ {0}, g ∈ ℝᵐ, and α ≥ δ := min{‖ψ‖₂ : ψ ∈ arg min ‖Gψ − g‖₂}. For every ψ₀ ∈ G⁻¹(p_{ker(Gᵗ)⊥}(g)) and every ψ₁ ∈ ker(G) \ {0}, p_{ker(G)⊥}(ψ₀) + √(α² − δ²) ψ₁/‖ψ₁‖₂ is an optimal solution of (22).
Proof. 
It only suffices to call on Theorem 8 by taking H := ℓ₂ⁿ, K := ℓ₂ᵐ, T := G, k := g, h₀ := ψ₀, and h₁ := ψ₁.    □
A MATLAB encoding for Corollary 5 is available in Appendix D.
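Corollary 5 can likewise be sketched in Python/NumPy (the appendices use MATLAB; names and data are ours): take ψ* = pinv(G)g = p_{ker(G)⊥}(ψ₀), set δ = ‖ψ*‖₂, and add a kernel direction scaled so that the total norm reaches α.

```python
import numpy as np

def precise_tykhonov(G, g, alpha):
    # Minimize ||G psi - g||_2 subject to ||psi||_2 = alpha, assuming
    # ker(G) != {0} and alpha >= delta := ||pinv(G) g||_2. The optimum is
    # p_{ker(G) perp}(psi0) + sqrt(alpha^2 - delta^2) psi1/||psi1||_2.
    psi_star = np.linalg.pinv(G) @ g           # p_{ker(G) perp}(psi0)
    delta = np.linalg.norm(psi_star)
    _, s, Vt = np.linalg.svd(G)
    r = int(np.sum(s > 1e-12))                 # r < n since ker(G) != {0}
    psi1 = Vt[r]                               # a unit vector in ker(G)
    return psi_star + np.sqrt(max(alpha**2 - delta**2, 0.0)) * psi1

G = np.array([[1.0, 1.0]])                     # ker(G) = span{(1, -1)}
g = np.array([2.0])
psi = precise_tykhonov(G, g, alpha=2.0)
```

Adding a kernel component changes neither Gψ nor the residual, so the returned point keeps the unconstrained minimal residual while meeting the prescribed norm α exactly.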

4.3. A Generalization of Theorem 5

As the reader may notice, both Lemma 3 and Theorem 5 are technical results crucial for the development of this manuscript. The following theorem generalizes them.
Theorem 9.
Let X be a Banach Space. Let P : X → X be a continuous linear projection such that ‖I − P‖ = 1. Let x₀ ∈ X. Then, dist(x₀, P(X)) = min{‖x₀ − y‖ : y ∈ P(X)} = ‖x₀ − P(x₀)‖. Even more, the previous min is attained at P(x₀).
Proof. 
It only suffices to observe that, for every y ∈ P(X),
‖x₀ − P(x₀)‖ = ‖(I − P)(x₀)‖ = ‖(I − P)(x₀ − y)‖ ≤ ‖x₀ − y‖.
   □

5. Conclusions

Let us summarize all the optimization problems we have dealt with throughout this manuscript. The "inclusion" symbol means that the "contained" problem is a particular case of the "containing" problem. The "equal" symbol means that the two involved problems are equivalent in the sense that they have the same set of optimal solutions:
  • Problem 1 ⊆ Problem 7 ⊆ Problem 2.
  • Problem 3 = Problem 9 ⊆ Problem 4.
  • Problem 8 ⊆ Problem 10 ⊆ Problem 4 (assuming c < d).
  • Problem 8 ⊆ Problem 3 (assuming c < d).
  • Problem 5 ⊆ Problem 11 ⊆ Problem 6.
  • Problem 12 ⊆ Problem 14 ⊆ Problem 13.
  • Problem 15 ⊆ Problem 17 ⊆ Problem 16.
We have fully solved Problem 7 (Theorem 2), Problem 8 (Lemma 4), Problem 11 (Corollary 3), Problem 14 (Theorem 6), and Problem 17 (Theorem 8). As a consequence, Problem 1 (Corollary 1), Problem 5, Problem 12 (Corollary 4), and Problem 15 (Corollary 5) are automatically fully solved as well. Problem 10 has only been partially solved (Theorem 4).

Author Contributions

Conceptualization, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; methodology, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; software, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; validation, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; formal analysis, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; investigation, A.C.-J., J.A.V.-M., C.C.-S., and F.J.G.-P.; resources, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; data curation, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; writing—original draft preparation, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; writing—review and editing, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; visualization, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; supervision, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; project administration, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P.; funding acquisition, A.C.-J., J.A.V.-M., C.C.-S. and F.J.G.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science, Innovation and Universities of Spain under Grant Number PGC-101514-B-I00 and by the 2014-2020 ERDF Operational Programme of the Department of Economy, Knowledge, Business and University of the Regional Government of Andalusia under Grant Number FEDER-UCA18-105867. The APC was funded by the Department of Mathematics of the University of Cadiz.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to dedicate this paper to their dearest friend José María Guerrero-Rodríguez.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviation is used in this manuscript:
MRI   Magnetic Resonance Imaging

Appendix A. Code for Problem 1

In this appendix, we describe the MATLAB encoding corresponding to Corollary 1.

Appendix A.1. Pseudocode

The pseudocode for Problem 1 consists of the following lines:
  • Find $\psi_0 \in \mathbb{R}^n$ such that $G\psi_0 = g$ (solve $G\psi_0 = g$).
  • Find a basis $K = \{v_1, \dots, v_s\}$ of $\ker(G)$.
  • Find a basis $O = \{w_1, \dots, w_{n-s}\}$ of $\ker(G)^{\perp}$.
  • Take $B := K \cup O = \{v_1, \dots, v_s, w_1, \dots, w_{n-s}\}$ as an ordered basis of $\mathbb{R}^n$.
  • Find the coordinates $(\alpha_1, \dots, \alpha_s, \alpha_{s+1}, \dots, \alpha_n)$ of $\psi_0$ with respect to $B$.
  • Define $p_{\ker(G)^{\perp}}(\psi_0) = \alpha_{s+1} w_1 + \dots + \alpha_n w_{n-s}$.

Appendix A.2. MATLAB Code

We will define a function MRI$(G,g)$ of two inputs, $G$ and $g$, which returns one output: $p_{\ker(G)^{\perp}}(\psi_0) = \alpha_{s+1} w_1 + \dots + \alpha_n w_{n-s}$.
function [sol] = MRI(G,g)
    x_0 = G\g;       % Pseudocode (1)
    n = length(x_0);
    K = null(G);     % Pseudocode (2)
    s = rank(K);
    O = null(K');    % Pseudocode (3)
    B = [K,O];       % Pseudocode (4)
    X = B\x_0;       % Pseudocode (5)
    coord = X(s+1:n,1);
    sol = O*coord;   % Pseudocode (6)
end
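For readers working outside MATLAB, a minimal NumPy sketch of the same computation follows. The names `null_basis` and `mri_min_norm` are ours, not part of the paper's code; `np.linalg.lstsq` supplies the particular solution that `G\g` provides in MATLAB:

```python
import numpy as np

def null_basis(A, tol=1e-10):
    """Orthonormal basis of ker(A), as columns, computed via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))      # numerical rank of A
    return Vt[r:].T               # trailing right singular vectors span ker(A)

def mri_min_norm(G, g):
    """Minimum-norm solution of G @ psi = g (assumes g lies in the range of G)."""
    psi0, *_ = np.linalg.lstsq(G, g, rcond=None)  # a particular solution
    K = null_basis(G)                             # orthonormal basis of ker(G)
    if K.size:
        psi0 = psi0 - K @ (K.T @ psi0)            # project onto ker(G)^perp
    return psi0
```

On consistent systems this agrees with `np.linalg.pinv(G) @ g`, which is a convenient cross-check.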

Appendix B. Code for Problem 1 When m = 1

In this appendix, we describe the MATLAB encoding corresponding to Corollary 2.

Appendix B.1. Pseudocode

The pseudocode for Problem 1 when m = 1 consists of the following line:
  • Compute $\frac{g}{\|G^t\|_2^{2}}\,G^t$.

Appendix B.2. MATLAB Code

We will define a function fMRI$(G,g)$ of two inputs, $G$ and $g$, which returns one output: $\frac{g}{\|G^t\|_2^{2}}\,G^t$.
function [sol] = fMRI(G,g)
    a = norm(G');
    b = g/a^2;
    sol = b*G';    % Pseudocode (1)
end
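A one-line NumPy counterpart (the name `fmri_min_norm` is ours), assuming $G$ is a nonzero $1 \times n$ row and $g$ a scalar:

```python
import numpy as np

def fmri_min_norm(G, g):
    """Minimum-norm solution of the single equation G @ psi = g, i.e. g * G^t / ||G^t||^2."""
    Gt = np.asarray(G, dtype=float).ravel()   # G^t viewed as a vector in R^n
    return (g / np.dot(Gt, Gt)) * Gt
```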

Appendix C. Code for Problem 12

In this appendix, we describe the MATLAB encoding corresponding to Corollary 4.

Appendix C.1. Pseudocode

The pseudocode for Problem 12 consists of the following lines:
  • Find a basis $K = \{v_1, \dots, v_s\}$ of $\ker(G^t)$.
  • Find a basis $O = \{w_1, \dots, w_{m-s}\}$ of $\ker(G^t)^{\perp}$.
  • Take $B := K \cup O = \{v_1, \dots, v_s, w_1, \dots, w_{m-s}\}$ as an ordered basis of $\mathbb{R}^m$.
  • Find the coordinates $(\alpha_1, \dots, \alpha_s, \alpha_{s+1}, \dots, \alpha_m)$ of $g$ with respect to $B$.
  • Define $p_{\ker(G^t)^{\perp}}(g) = \alpha_{s+1} w_1 + \dots + \alpha_m w_{m-s}$.
  • Apply MRI$(G, p_{\ker(G^t)^{\perp}}(g))$.

Appendix C.2. MATLAB Code

We will define a function bTR$(G,g)$ of two inputs, $G$ and $g$, which returns one output: MRI$(G, p_{\ker(G^t)^{\perp}}(g))$.
function [sol] = bTR(G,g)
    m = size(G,1);   % number of rows of G
    K = null(G');    % Pseudocode (1)
    s = rank(K);
    O = null(K');    % Pseudocode (2)
    B = [K,O];       % Pseudocode (3)
    X = B\g;         % Pseudocode (4)
    coord = X(s+1:m,1);
    p = O*coord;     % Pseudocode (5)
    sol = MRI(G,p);  % Pseudocode (6)
end
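A NumPy sketch of the same pipeline (the name `btr` is ours). Projecting $g$ onto $\ker(G^t)^{\perp} = \operatorname{range}(G)$ and then taking the minimum-norm solution is exactly what the Moore–Penrose pseudoinverse computes, which gives an easy cross-check:

```python
import numpy as np

def btr(G, g, tol=1e-10):
    """Minimum-norm least-squares solution of G @ psi ~ g."""
    U, s, _ = np.linalg.svd(G)
    r = int(np.sum(s > tol))
    Ur = U[:, :r]                    # orthonormal basis of range(G) = ker(G^t)^perp
    p = Ur @ (Ur.T @ g)              # p_{ker(G^t)^perp}(g)
    sol, *_ = np.linalg.lstsq(G, p, rcond=None)  # lstsq returns the minimum-norm solution
    return sol
```

Equivalently, `np.linalg.pinv(G) @ g` returns the same vector.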

Appendix D. Code for Problem 15

In this appendix, we describe the MATLAB encoding corresponding to Corollary 5.

Appendix D.1. Pseudocode

The pseudocode for Problem 15 consists of the following lines:
  • Define $\psi_0 := \mathrm{bTR}(G,g)$.
  • Define $\delta := \|\psi_0\|_2$.
  • Find $\psi_1 \in \mathbb{R}^n \setminus \{0\}$ such that $G\psi_1 = 0$ (solve $G\psi_1 = 0$).
  • Define $\psi_0 + \frac{\sqrt{\alpha^{2}-\delta^{2}}}{\|\psi_1\|_2}\,\psi_1$.

Appendix D.2. MATLAB Code

We will define a function pTR$(G,g,\alpha)$ of three inputs, $G$, $g$, and $\alpha$, which returns one output: $\psi_0 + \frac{\sqrt{\alpha^{2}-\delta^{2}}}{\|\psi_1\|_2}\,\psi_1$.
function [sol] = pTR(G,g,a)
    x_0 = bTR(G,g);    % Pseudocode (1)
    d = norm(x_0);     % Pseudocode (2)
    X = null(G);
    x_1 = X(:,1);      % Pseudocode (3)
    sol = x_0 + sqrt(a^2-d^2)*x_1/norm(x_1);  % Pseudocode (4)
end
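Finally, a NumPy sketch of the same construction (the name `ptr` is ours); as in Theorem 8, it assumes $\ker(G) \neq \{0\}$ and $\alpha \geq \delta$:

```python
import numpy as np

def ptr(G, g, alpha, tol=1e-10):
    """Minimize ||G @ psi - g|| subject to ||psi|| = alpha (assumes ker(G) != {0}, alpha >= delta)."""
    psi0 = np.linalg.pinv(G) @ g     # bTR(G, g): minimum-norm least-squares solution
    delta = np.linalg.norm(psi0)
    _, s, Vt = np.linalg.svd(G)
    r = int(np.sum(s > tol))
    psi1 = Vt[r]                     # a unit vector of ker(G)
    return psi0 + np.sqrt(alpha**2 - delta**2) * psi1
```

Since `psi1` lies in $\ker(G)$ and is orthogonal to `psi0`, the returned vector has norm exactly `alpha` and the same residual as `psi0`.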
