Article

Minimization over Nonconvex Sets

by José Antonio Vilchez Membrilla 1,*,†, Víctor Salas Moreno 1,†, Soledad Moreno-Pulido 2,†, Alberto Sánchez-Alzola 3,†, Clemente Cobos Sánchez 1,† and Francisco Javier García-Pacheco 2,†
1 Department of Electronics, College of Engineering, University of Cádiz, 11510 Puerto Real, Spain
2 Department of Mathematics, College of Engineering, University of Cádiz, 11510 Puerto Real, Spain
3 Department of Statistics and Operation Research, College of Engineering, University of Cádiz, 11510 Puerto Real, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2024, 16(7), 809; https://doi.org/10.3390/sym16070809
Submission received: 28 May 2024 / Revised: 19 June 2024 / Accepted: 21 June 2024 / Published: 27 June 2024
(This article belongs to the Section Mathematics)

Abstract

Minimum norm problems consist of finding the distance from a closed subset of a normed space to the origin. Usually, the given closed subset is also asked to be convex, resulting in a convex minimum norm problem. There are plenty of techniques and algorithms to compute the distance from a closed convex set to the origin, mostly in the Hilbert space setting. In this manuscript, we consider nonconvex minimum norm problems that arise in Bioengineering and reformulate them in such a way that the solution to the reformulation is already known. In particular, we tackle the problem min ‖x‖ subject to ‖R_k(x)‖ ≥ a_k for k = 1, …, l, where x ∈ X and R_k : X → Y are continuous linear operators between real normed spaces X, Y, and a_k > 0 for k = 1, …, l. Notice that the region of constraints of this problem is neither convex nor balanced. However, it is additively symmetric, as is the objective function, thanks to the properties satisfied by norms; this is what makes the analytic resolution of such a nonconvex minimization possible. The recent literature shows that the design of optimal coils for electronics applications can be achieved by solving problems of this kind. In this work, we apply our analytical solutions to design an optimal coil for an electromagnetic sensor.
MSC:
47L05; 47L90; 49J30; 90B50

1. Introduction

Optimization problems consisting of minimizing the norm of a vector over a certain closed subset of a real normed space are classical in Optimization Theory, with plenty of applications in Physics, Statistics, Electronics, Mechanics, etc. These problems can be approached either geometrically [1,2] or analytically [3,4]. Applications of these minimization problems include the optimal design of Transcranial Magnetic Stimulation (TMS) coils and Magnetic Resonance Imaging (MRI) coils [5,6,7,8], as well as the improvement of classical statistical tools such as Principal Components [9]. The geometric study of minimum norm problems probably started with the metric notion of proximinality [10,11]. This notion, in the context of real normed spaces, led to remarkable results such as the famous James characterization of reflexivity in terms of norm-attaining functionals [12]. A few years later, James [13] proved the existence of a noncomplete normed space in which every functional is norm-attaining (bear in mind that a functional is norm-attaining if and only if the corresponding unit hyperplane has a minimum norm element). Later on, Blatter [2] proved that every normed space in which each closed convex subset has a minimum norm element must be complete. This result was later improved in [1] (Theorem 2.1). Other interesting results related to proximinality and minimum norm elements can be found in [14,15,16,17,18]. The purpose of this manuscript is to approach the solution of the following minimization problem:
min ‖x‖ subject to ‖R_k(x)‖ ≥ a_k, k = 1, …, l, x ∈ X, (1)
where R_k : X → Y are continuous linear operators between real normed spaces X, Y, and a_k > 0 for k = 1, …, l. This problem appears quite often when modeling TMS coils [3,19,20], but it has never been solved in an analytic way. By relying on the modern techniques of Functional Analysis, Operator Theory, and the Geometry of Banach Spaces, we reformulate (1) to reduce it to a single-objective optimization problem for which there is plenty of material in the literature to solve it both analytically and computationally [21,22]. Observe that the region of constraints of (1) is neither convex nor balanced (except for the trivial case where all the R_k are null operators). However, the region of constraints of (1) is additively symmetric, as is the objective function, thanks to the properties satisfied by norms, and this is what makes the analytic reformulation (and hence resolution) of (1) possible.
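The two geometric claims above, symmetry of the region {x : ‖R_k(x)‖ ≥ a_k for all k} under x ↦ −x and its nonconvexity, are easy to witness numerically. The following sketch uses a hypothetical finite-dimensional instance (random matrices R_k, arbitrary thresholds a_k); it is an illustration, not part of the analytic development.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional instance: X = R^5, Y = R^3, l = 2 constraints.
R_ops = [rng.normal(size=(3, 5)) for _ in range(2)]
a = [1.0, 2.0]

def feasible(x):
    """x lies in the region {x : ||R_k x|| >= a_k for all k}."""
    return all(np.linalg.norm(Rk @ x) >= ak for Rk, ak in zip(R_ops, a))

# Find a feasible point by scaling a random direction up.
x = rng.normal(size=5)
while not feasible(x):
    x *= 2.0

print(feasible(x))    # True
print(feasible(-x))   # True: additive symmetry, since ||R_k(-x)|| = ||R_k(x)||
print(feasible(0.5 * (x + (-x))))  # False: the midpoint is 0, so the region is nonconvex
```

The midpoint of the feasible pair x, −x is the origin, which violates every constraint; this is exactly why the region is nonconvex despite being closed and symmetric.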

2. Materials and Methods

We deal with optimization problems in the context of Operator Theory and Functional Analysis. When an optimization problem carries more than one objective function, two sets of solutions are considered: optimal solutions (feasible solutions that maximize/minimize all objective functions at once) and Pareto optimal solutions (feasible solutions with the property that if another feasible solution improves on them in one objective function, then they improve on that feasible solution in a different objective function). The sets of optimal solutions and of Pareto optimal solutions of an optimization problem are denoted by sol and Par, respectively. Refer to [23] for a wider perspective on Pareto optimal solutions. Special attention is paid to optimization problems involving continuous linear operators between real normed spaces, such as
max ‖T_i(x)‖, i = 1, …, m; min ‖S_j(x)‖, j = 1, …, n; x ∈ R, (2)
where T_i, S_j : X → Y are continuous linear operators between real normed spaces X, Y, and R is a closed subset of X called the set of restrictions/constraints or the set of feasible solutions (all normed spaces considered throughout this manuscript are over the reals). For (2), the set of Pareto optimal solutions is the interesting one, since its set of optimal solutions is generally void, as shown in [21] (Theorem 2). Recall that a subset A of a real vector space X is called homogeneous provided that ℝA ⊆ A, strictly homogeneous provided that (ℝ∖{0})A ⊆ A, and positively homogeneous when ℝ⁺A ⊆ A. It is a straightforward observation that, if R is strictly or positively homogeneous, then so is Par(2). On the other hand, notice that if ker(S_1) ∩ ⋯ ∩ ker(S_n) ∩ R = {0}, then 0 ∈ Par(2).
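The Pareto notion above can be made concrete with a brute-force dominance filter over finitely many candidate points of a hypothetical Euclidean instance of (2). This is only a didactic sketch (random operators, random candidates); the paper's contribution is precisely to avoid such enumeration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical instance of (2): maximize ||T_i x|| (i = 1, 2), minimize ||S_1 x||.
T_ops = [rng.normal(size=(3, 4)) for _ in range(2)]
S_ops = [rng.normal(size=(2, 4))]

def scores(x):
    # Orient every objective so that "larger is better".
    return np.array([np.linalg.norm(T @ x) for T in T_ops]
                    + [-np.linalg.norm(S @ x) for S in S_ops])

def pareto_filter(candidates):
    """Keep the candidates not dominated by any other candidate."""
    vals = [scores(x) for x in candidates]
    front = []
    for i, vi in enumerate(vals):
        dominated = any(np.all(vj >= vi) and np.any(vj > vi)
                        for j, vj in enumerate(vals) if j != i)
        if not dominated:
            front.append(candidates[i])
    return front

candidates = [rng.normal(size=4) for _ in range(200)]
front = pareto_filter(candidates)
```

A point survives the filter exactly when no other candidate is at least as good in every objective and strictly better in one, which is the definition of Pareto optimality restricted to the sample.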
One important consideration is the fact that “subspace” refers to a subobject in a certain category, thus meaning that if we are working with the category of normed spaces, then “subspace” refers simply to a linear subspace, whereas if we are working with the category of Banach spaces, then “subspace” refers to a closed linear subspace.

3. Results

We begin this section by proving that (2) can be reformulated to a problem of the following form:
max ‖T(x)‖; min ‖S(x)‖; x ∈ R. (3)
Theorem 1. 
Let X, Y be normed spaces, T_i, S_j : X → Y be continuous linear operators, i = 1, …, m, j = 1, …, n, and R be a closed subset of X. Consider the continuous linear operators
T : X → ℓ²_m(Y) := Y ⊕₂ ⋯ ⊕₂ Y, x ↦ T(x) := (T_1(x), …, T_m(x)),
and
S : X → ℓ²_n(Y) := Y ⊕₂ ⋯ ⊕₂ Y, x ↦ S(x) := (S_1(x), …, S_n(x)).
Then,
1. 
Par(3) ⊆ Par(2).
2. 
If sol(2) ≠ ∅, then sol(2) = sol(3).
Proof. 
The proof is itemized according to the statement of the theorem:
1.
Fix an arbitrary x ∈ Par(3). Suppose to the contrary that x ∉ Par(2). Then, there exists y ∈ R satisfying at least one of the following two conditions:
  • There exists i_0 ∈ {1, …, m} with ‖T_{i_0}(y)‖ > ‖T_{i_0}(x)‖, ‖T_i(y)‖ ≥ ‖T_i(x)‖ for all i ∈ {1, …, m}∖{i_0}, and ‖S_j(y)‖ ≤ ‖S_j(x)‖ for all j ∈ {1, …, n}.
  • There exists j_0 ∈ {1, …, n} with ‖S_{j_0}(y)‖ < ‖S_{j_0}(x)‖, ‖S_j(y)‖ ≤ ‖S_j(x)‖ for all j ∈ {1, …, n}∖{j_0}, and ‖T_i(y)‖ ≥ ‖T_i(x)‖ for all i ∈ {1, …, m}.
We may assume, without any loss of generality, that the first condition holds. Notice that
‖T(x)‖² = ‖T_1(x)‖² + ⋯ + ‖T_m(x)‖² < ‖T_1(y)‖² + ⋯ + ‖T_m(y)‖² = ‖T(y)‖²,
and
‖S(y)‖² = ‖S_1(y)‖² + ⋯ + ‖S_n(y)‖² ≤ ‖S_1(x)‖² + ⋯ + ‖S_n(x)‖² = ‖S(x)‖².
This contradicts the fact that x Par ( 3 ) .
2.
Since sol(2) ≠ ∅, we have that Par(2) = sol(2). Notice that sol(3) ⊆ Par(3) ⊆ Par(2) = sol(2). It only remains to show that sol(2) ⊆ sol(3). Indeed, take any x ∈ sol(2). Let y ∈ R. Then, ‖T_i(x)‖ ≥ ‖T_i(y)‖ for all i = 1, …, m and ‖S_j(x)‖ ≤ ‖S_j(y)‖ for all j = 1, …, n. Therefore,
‖T(x)‖² = ‖T_1(x)‖² + ⋯ + ‖T_m(x)‖² ≥ ‖T_1(y)‖² + ⋯ + ‖T_m(y)‖² = ‖T(y)‖²,
and
‖S(x)‖² = ‖S_1(x)‖² + ⋯ + ‖S_n(x)‖² ≤ ‖S_1(y)‖² + ⋯ + ‖S_n(y)‖² = ‖S(y)‖².
As a consequence, the arbitrariness of y shows that x ∈ sol(3). □
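In the Euclidean case, the ℓ2 direct sum Y ⊕₂ ⋯ ⊕₂ Y used in Theorem 1 is realized by stacking the matrices of the T_i vertically, so that ‖T(x)‖² = Σ‖T_i(x)‖² holds by construction. A minimal numerical check of this identity, with hypothetical random operators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Euclidean instance: T_1, ..., T_m as 3x5 matrices.
T_ops = [rng.normal(size=(3, 5)) for _ in range(4)]
T = np.vstack(T_ops)   # realizes T(x) = (T_1(x), ..., T_m(x)) in the l2 direct sum

x = rng.normal(size=5)
lhs = np.linalg.norm(T @ x) ** 2
rhs = sum(np.linalg.norm(Ti @ x) ** 2 for Ti in T_ops)
print(np.isclose(lhs, rhs))  # True
```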
Thanks to Theorem 1, we can reduce the multioptimization (2) to maxmin problems of the form (3). Our next result shows that (3) can be solved via the following (single-objective) optimization problem:
max ‖T(x)‖ subject to ‖S(x)‖ ≤ 1, x ∈ R. (4)
Theorem 2. 
Let X, Y be normed spaces, T, S : X → Y be continuous linear operators, and R be a positively homogeneous closed subset of X. Then, we have the following:
1. 
Par(3) ∖ ker(S) ⊆ (ℝ⁺∖{0}) sol(4).
2. 
If (ker(S) ∩ R) ∖ ker(T) ≠ ∅, then Par(3) = sol(4) = ∅.
3. 
If ker(S) ∩ R ⊆ ker(T), then ker(S) ∩ R ⊆ Par(3).
4. 
If ker(S) ∩ R ⊆ ker(T) and R ∖ ker(T) ≠ ∅, then sol(4) ⊆ Par(3) ∩ {x ∈ X : ‖S(x)‖ = 1}.
Proof. 
The proof is itemized according to the statement of the theorem:
1.
Fix an arbitrary x_0 ∈ Par(3) ∖ ker(S). Then, S(x_0) ≠ 0. In this case, we can write x_0 = ‖S(x_0)‖ (x_0/‖S(x_0)‖). If we prove that y_0 := x_0/‖S(x_0)‖ ∈ sol(4), then we obtain that x_0 = ‖S(x_0)‖ y_0 ∈ (ℝ⁺∖{0}) sol(4). Indeed, let us observe first that ‖S(y_0)‖ = 1 and y_0 = (1/‖S(x_0)‖) x_0 ∈ R, since x_0 ∈ R and R is positively homogeneous. As a consequence, y_0 is a feasible solution of (4). Suppose on the contrary that y_0 is not an optimal solution of (4); in other words, y_0 ∉ sol(4). Then, there exists z ∈ R such that ‖S(z)‖ ≤ 1 and ‖T(z)‖ > ‖T(y_0)‖. Then, we obtain that
‖T(x_0/‖S(x_0)‖)‖ < ‖T(z)‖,
thus meaning that ‖T(x_0)‖ < ‖T(‖S(x_0)‖ z)‖. Note that ‖S(x_0)‖ z ∈ R. Since x_0 ∈ Par(3) by hypothesis, we reach the contradiction that ‖S(x_0)‖ < ‖S(‖S(x_0)‖ z)‖ = ‖S(x_0)‖ ‖S(z)‖ ≤ ‖S(x_0)‖.
2.
Fix z ∈ (ker(S) ∩ R) ∖ ker(T). If there exists x ∈ sol(4), then we can find an n ∈ ℕ sufficiently large so that n‖T(z)‖ > ‖T(x)‖. However, ‖S(nz)‖ = n‖S(z)‖ = 0 ≤ 1 and nz ∈ R, which implies the contradiction that n‖T(z)‖ = ‖T(nz)‖ ≤ ‖T(x)‖ < n‖T(z)‖. As a consequence, sol(4) = ∅. Suppose next that there exists y ∈ Par(3). We can find an m ∈ ℕ sufficiently large so that m‖T(z)‖ > ‖T(y)‖. Then, ‖T(mz)‖ = m‖T(z)‖ > ‖T(y)‖ and mz ∈ R, which, since y is Pareto optimal, implies the contradiction that ‖S(y)‖ < ‖S(mz)‖ = m‖S(z)‖ = 0. As a consequence, Par(3) = ∅.
3.
Fix an arbitrary x ∈ ker(S) ∩ R. If x ∉ Par(3), then there exists y ∈ R such that ‖T(y)‖ > ‖T(x)‖ and ‖S(y)‖ ≤ ‖S(x)‖ = 0, thus concluding that y ∈ ker(S) ∩ R ⊆ ker(T) and reaching the contradiction that ‖T(x)‖ < ‖T(y)‖ = 0.
4.
Fix an arbitrary y_0 ∈ sol(4). We will prove first that ‖S(y_0)‖ = 1. So, suppose on the contrary that ‖S(y_0)‖ < 1. We distinguish between two cases:
  • ‖S(y_0)‖ = 0. In this case, y_0 ∈ ker(S) ∩ R ⊆ ker(T), so T(y_0) = 0, and it suffices to take any z ∈ R ∖ ker(T) to reach the contradiction that ‖S(z/‖S(z)‖)‖ = 1, z/‖S(z)‖ ∈ R, and
    ‖T(z/‖S(z)‖)‖ = ‖T(z)‖/‖S(z)‖ > 0 = ‖T(y_0)‖
    (note that S(z) ≠ 0, because z ∈ R ∖ ker(T) and ker(S) ∩ R ⊆ ker(T)).
  • ‖S(y_0)‖ ≠ 0. In this case, it suffices to observe that ‖S(y_0/‖S(y_0)‖)‖ = 1 and y_0/‖S(y_0)‖ ∈ R, but
    ‖T(y_0/‖S(y_0)‖)‖ = ‖T(y_0)‖/‖S(y_0)‖ > ‖T(y_0)‖,
    thus reaching a contradiction with the fact that y_0 ∈ sol(4).
As a consequence, ‖S(y_0)‖ = 1. Next, suppose to the contrary that y_0 ∉ Par(3). Then, there exists z ∈ R satisfying at least one of the following two conditions:
  • ‖T(z)‖ > ‖T(y_0)‖ and ‖S(z)‖ ≤ ‖S(y_0)‖. In this case, ‖S(z)‖ ≤ ‖S(y_0)‖ ≤ 1 and ‖T(z)‖ > ‖T(y_0)‖, which directly contradicts that y_0 ∈ sol(4).
  • ‖S(z)‖ < ‖S(y_0)‖ and ‖T(z)‖ ≥ ‖T(y_0)‖. In this case, ‖S(z)‖ < ‖S(y_0)‖ = 1 and ‖T(z)‖ ≥ ‖T(y_0)‖; thus, since y_0 ∈ sol(4), it must occur that ‖T(z)‖ = ‖T(y_0)‖, hence z ∈ sol(4), which means that ‖S(z)‖ = 1, and this contradicts that ‖S(z)‖ < ‖S(y_0)‖ = 1. □
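In the finite-dimensional Euclidean case with R = X and S invertible, problem (4) has a closed form: substituting y = S(x) turns it into maximizing ‖T S⁻¹ y‖ over ‖y‖ ≤ 1, whose value is the largest singular value of T S⁻¹. The sketch below uses hypothetical random matrices and is our own illustration of this reduction, not the general normed-space method of [21]; note that the computed solution lands on ‖S(x)‖ = 1, as Theorem 2(4) predicts.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5

# Hypothetical Euclidean instance of (4) with R = X = R^n and S invertible.
T = rng.normal(size=(3, n))
S = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned, hence invertible

M = T @ np.linalg.inv(S)          # max ||T S^{-1} y|| over ||y|| <= 1
U, sigma, Vt = np.linalg.svd(M)
y_star = Vt[0]                    # top right singular vector, ||y_star|| = 1
x_star = np.linalg.solve(S, y_star)

# x_star is feasible with an active constraint and attains the value sigma[0].
print(np.isclose(np.linalg.norm(S @ x_star), 1.0))
print(np.isclose(np.linalg.norm(T @ x_star), sigma[0]))
```

Any feasible x satisfies ‖T(x)‖ = ‖M S(x)‖ ≤ σ₁‖S(x)‖ ≤ σ₁, so no feasible point beats x_star.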
The single-objective optimization (4) can in fact be reformulated into another single-objective optimization whose set of constraints is definitely not convex:
min ‖S(x)‖ subject to ‖T(x)‖ ≥ 1, x ∈ R. (5)
Notice that the set of restrictions of (5) is most likely nonconvex even if R is convex.
Lemma 1. 
Let X, Y be normed spaces, T, S : X → Y be continuous linear operators, and R be a positively homogeneous closed subset of X. If ker(S) ∩ R ⊆ ker(T), then sol(5) ⊆ {x ∈ X : ‖T(x)‖ = 1}.
Proof. 
Fix an arbitrary y_0 ∈ sol(5). Notice that S(y_0) ≠ 0, since otherwise we would have y_0 ∈ ker(S) ∩ R ⊆ ker(T), contradicting that ‖T(y_0)‖ ≥ 1. If ‖T(y_0)‖ > 1, then it suffices to observe that ‖T(y_0/‖T(y_0)‖)‖ = 1 and y_0/‖T(y_0)‖ ∈ R, but
‖S(y_0/‖T(y_0)‖)‖ = ‖S(y_0)‖/‖T(y_0)‖ < ‖S(y_0)‖,
thus reaching a contradiction with the fact that y_0 ∈ sol(5). □
Theorem 3. 
Let X, Y be normed spaces, T, S : X → Y be continuous linear operators, and R be a positively homogeneous closed subset of X. If ker(S) ∩ R ⊆ ker(T) and R ∖ ker(T) ≠ ∅, then (ℝ⁺∖{0}) sol(4) = (ℝ⁺∖{0}) sol(5).
Proof. 
Fix an arbitrary x_0 ∈ sol(4). We will prove that x_0/‖T(x_0)‖ ∈ sol(5). So first, let us show that T(x_0) ≠ 0. Indeed, if T(x_0) = 0, then by taking any z ∈ R ∖ ker(T), we reach the contradiction that ‖S(z/‖S(z)‖)‖ = 1, z/‖S(z)‖ ∈ R, and
‖T(z/‖S(z)‖)‖ = ‖T(z)‖/‖S(z)‖ > 0 = ‖T(x_0)‖.
Therefore, T(x_0) ≠ 0. Next, according to Theorem 2(4), ‖S(x_0)‖ = 1. Suppose that x_0/‖T(x_0)‖ ∉ sol(5). There exists y ∈ R with ‖T(y)‖ ≥ 1 such that
‖S(y)‖ < ‖S(x_0/‖T(x_0)‖)‖.
Next, ‖S(‖T(x_0)‖ y)‖ < ‖S(x_0)‖ = 1, ‖T(x_0)‖ y ∈ R, and
‖T(‖T(x_0)‖ y)‖ = ‖T(x_0)‖ ‖T(y)‖ ≥ ‖T(x_0)‖.
Since x_0 ∈ sol(4), we conclude that ‖T(y)‖ = 1 and ‖T(x_0)‖ y ∈ sol(4). In accordance with Theorem 2(4), ‖S(‖T(x_0)‖ y)‖ = 1, thus contradicting the above assertion that ‖S(‖T(x_0)‖ y)‖ < ‖S(x_0)‖ = 1. This proves that sol(4) ⊆ (ℝ⁺∖{0}) sol(5). Conversely, fix an arbitrary y_0 ∈ sol(5). We prove that y_0/‖S(y_0)‖ ∈ sol(4). So first, let us show that S(y_0) ≠ 0. Indeed, if S(y_0) = 0, then y_0 ∈ ker(S) ∩ R ⊆ ker(T), which contradicts that ‖T(y_0)‖ = 1 in view of Lemma 1. Thus, S(y_0) ≠ 0. Suppose on the contrary that y_0/‖S(y_0)‖ ∉ sol(4). There exists x ∈ R with ‖S(x)‖ ≤ 1 such that
‖T(x)‖ > ‖T(y_0/‖S(y_0)‖)‖.
Next, ‖T(‖S(y_0)‖ x)‖ > ‖T(y_0)‖ = 1, ‖S(y_0)‖ x ∈ R, and
‖S(‖S(y_0)‖ x)‖ = ‖S(y_0)‖ ‖S(x)‖ ≤ ‖S(y_0)‖.
Since y_0 ∈ sol(5), we conclude that ‖S(x)‖ = 1 and ‖S(y_0)‖ x ∈ sol(5). In accordance with Lemma 1, ‖T(‖S(y_0)‖ x)‖ = 1, thus contradicting the above assertion that ‖T(‖S(y_0)‖ x)‖ > ‖T(y_0)‖ = 1. This proves that sol(5) ⊆ (ℝ⁺∖{0}) sol(4). □
At this stage, let us go back to (1). The region of constraints of (1) is
R := {x ∈ X : ‖R_k(x)‖ ≥ a_k, k = 1, …, l},
which is most likely nonconvex. Special attention is paid to the subsets of R given by R_0 := {x ∈ R : ‖R_k(x)‖ = a_k for all k = 1, …, l} and R_1 := {x ∈ R : there exists k ∈ {1, …, l} with ‖R_k(x)‖ = a_k}. Our next results are aimed at relating (1) with the multioptimization
max ‖R_k(x)‖, k = 1, …, l; min ‖x‖; x ∈ X. (6)
This approach follows the ideas of the previous theorems, but it is more straightforward.
Theorem 4. 
Let X, Y be normed spaces, R_k : X → Y be continuous linear operators, k = 1, …, l, and let a_k > 0 for k = 1, …, l. Then,
1. 
sol(1) ⊆ R_1.
2. 
If sol(1) ⊆ R_0, then sol(1) = Par(6) ∩ R_0.
Proof. 
The proof is itemized according to the statement of the theorem:
1.
Suppose to the contrary that there exists x_0 ∈ sol(1) ∖ R_1. Then, 0 < a_k/‖R_k(x_0)‖ < 1 for all k ∈ {1, …, l}, so we can take
0 < ε := max{a_k/‖R_k(x_0)‖ : k ∈ {1, …, l}} < 1.
Observe that
ε ≥ a_k/‖R_k(x_0)‖ (7)
for all k ∈ {1, …, l}. In particular,
‖R_k(x_0)‖/a_k ≥ 1/ε
for all k ∈ {1, …, l}. Notice also that 0 < ε < 1, so ‖εx_0‖ = ε‖x_0‖ < ‖x_0‖. If we prove that εx_0 ∈ R, then we will reach a contradiction with the fact that x_0 ∈ sol(1). Indeed, fix an arbitrary k ∈ {1, …, l}. By relying on (7),
‖R_k(εx_0)‖ = ε‖R_k(x_0)‖ = ε a_k (‖R_k(x_0)‖/a_k) ≥ ε a_k (1/ε) = a_k.
As a consequence, εx_0 ∈ R, and we have obtained the desired contradiction. This contradiction forces that x_0 ∈ R_1. Finally, the arbitrariness of x_0 implies that sol(1) ⊆ R_1.
2.
We prove first that sol(1) ⊆ Par(6) ∩ R_0. By hypothesis, sol(1) ⊆ R_0, so it only remains to show that sol(1) ⊆ Par(6). Suppose to the contrary that there exists x_0 ∈ sol(1) ∖ Par(6). There exists y ∈ X satisfying one of the following two conditions:
  • ‖R_{k_0}(y)‖ > ‖R_{k_0}(x_0)‖ for some k_0 ∈ {1, …, l}, ‖R_k(y)‖ ≥ ‖R_k(x_0)‖ for all k ∈ {1, …, l}∖{k_0}, and ‖y‖ ≤ ‖x_0‖. Notice that y ∈ R. Since x_0 ∈ sol(1), we conclude that ‖x_0‖ ≤ ‖y‖. Thus, ‖x_0‖ = ‖y‖, meaning that y ∈ sol(1). By hypothesis, sol(1) ⊆ R_0; hence, y ∈ R_0, so ‖R_k(y)‖ = a_k for all k = 1, …, l. This contradicts the fact that ‖R_{k_0}(y)‖ > ‖R_{k_0}(x_0)‖ = a_{k_0}.
  • ‖y‖ < ‖x_0‖ and ‖R_k(y)‖ ≥ ‖R_k(x_0)‖ for all k = 1, …, l. In this situation, y ∈ R. Since x_0 ∈ sol(1), we conclude that ‖x_0‖ ≤ ‖y‖, which contradicts that ‖y‖ < ‖x_0‖.
In both cases, we have obtained a contradiction. As a consequence, x_0 ∈ Par(6). Conversely, let us prove now that sol(1) ⊇ Par(6) ∩ R_0. Suppose again to the contrary that there exists x_0 ∈ (Par(6) ∩ R_0) ∖ sol(1). Since x_0 ∈ R_0 ⊆ R and x_0 ∉ sol(1), there exists y ∈ R with ‖y‖ < ‖x_0‖. However, x_0 ∈ Par(6), which means that there exists k_1 ∈ {1, …, l} such that ‖R_{k_1}(x_0)‖ > ‖R_{k_1}(y)‖. Then, we obtain the following contradiction: a_{k_1} = ‖R_{k_1}(x_0)‖ > ‖R_{k_1}(y)‖ ≥ a_{k_1}. As a consequence, x_0 ∈ sol(1). □
From Theorem 4, an immediate corollary can be inferred for (1) when there is a single constraint, that is, l = 1. In this situation, R_0 = R_1.
Corollary 1. 
Let X, Y be normed spaces, R : X → Y be a continuous linear operator, and let a > 0. Then, sol(1) = Par(6) ∩ R_0.
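In the Euclidean single-constraint case of Corollary 1, min ‖x‖ subject to ‖Rx‖ ≥ a, the solution set admits a closed form: the cheapest way to reach ‖Rx‖ = a is along the direction R amplifies the most, so x* = (a/σ₁) v₁ with σ₁, v₁ the top singular pair of R. The instance below is hypothetical (random R, arbitrary a) and is meant only to illustrate the corollary, not the general normed-space setting.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical single-constraint instance: min ||x|| subject to ||R x|| >= a.
R = rng.normal(size=(3, 5))
a = 2.0

U, sigma, Vt = np.linalg.svd(R)
x_star = (a / sigma[0]) * Vt[0]   # top right singular vector, scaled to the boundary

print(np.isclose(np.linalg.norm(R @ x_star), a))   # active constraint: x_star lies in R_0
print(np.isclose(np.linalg.norm(x_star), a / sigma[0]))
```

Optimality follows from ‖Rx‖ ≤ σ₁‖x‖: every feasible x satisfies ‖x‖ ≥ a/σ₁, and x_star attains that bound on the boundary set R_0, exactly as sol(1) = Par(6) ∩ R_0 prescribes.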

4. Discussion

Slight modifications to the proofs of Theorems 2 and 3 allow for reformulations of (4) and (5) into single-objective optimization problems of the forms
max ‖T(x)‖/‖S(x)‖ subject to S(x) ≠ 0, x ∈ R,
and
min ‖S(x)‖/‖T(x)‖ subject to T(x) ≠ 0, x ∈ R,
which may be approached using certain software based upon heuristic algorithms (although we do not recommend the use of any heuristic method unsupported by theoretical or mathematical proofs). With respect to (1), in virtue of Theorem 4, (1) can be studied through (6). Since (6) has the form of (2), it can be reformulated and solved by means of Theorems 1–3. As a consequence, the reformulation scheme is the following:
max ‖T_i(x)‖, i = 1, …, m; min ‖S_j(x)‖, j = 1, …, n; x ∈ R → max ‖T(x)‖; min ‖S(x)‖; x ∈ R → max ‖T(x)‖ s.t. ‖S(x)‖ ≤ 1, x ∈ R → min ‖S(x)‖ s.t. ‖T(x)‖ ≥ 1, x ∈ R.
Observe that the set of restrictions of (5) is R ∩ {x ∈ X : ‖T(x)‖ ≥ 1}, which is closed but not convex in general. To overcome this nonconvexity issue, one can simply restrict to (4), whose set of constraints is R ∩ {x ∈ X : ‖S(x)‖ ≤ 1}, which is convex provided that R is convex.

5. Conclusions

According to Theorems 1–3, a Pareto optimal solution of (2) can be found by computing an optimal solution of (4) or (5). In [21], a solution of (4) is found under fairly mild conditions in terms of the left inverses of continuous linear operators.

Author Contributions

Conceptualization, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; methodology, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; software, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; validation, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; formal analysis, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; investigation, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; resources, F.J.G.-P., V.S.M., J.A.V.M., S.M.-P., A.S.-A. and C.C.S.; data curation, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; writing—original draft preparation, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; writing—review and editing, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; visualization, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; supervision, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S.; project administration, F.J.G.-P., J.A.V.M., S.M.-P., A.S.-A. and C.C.S.; funding acquisition, F.J.G.-P., J.A.V.M., V.S.M., S.M.-P., A.S.-A. and C.C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Consejería de Universidad, Investigación e Innovación de la Junta de Andalucía: ProyExcel00780 and ProyExcel01036 and by the Ministerio de Ciencia e Innovación: TED2021-131704A-I00.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Application to Optimal Coil Design for Electronics Sensors

A direct application of Corollary 1 is the optimal design of a coil to characterize a magnetic measurement system. The following nonconvex minimization induces an optimal coil:
min ψᵀLψ subject to ‖B_y ψ‖₂ = 1. (A1)
We will turn (A1) into a problem of the form (1) with l = 1. The Cholesky decomposition applies to L as L = CᵀC, yielding ψᵀLψ = (Cψ)ᵀ(Cψ) = ‖Cψ‖₂²; hence, (A1) turns into
min ‖Cψ‖₂ subject to ‖B_y ψ‖₂ = 1. (A2)
Since C is an invertible square matrix, the change of variables χ = Cψ reformulates (A2) as
min ‖χ‖₂ subject to ‖B_y C⁻¹ χ‖₂ = 1. (A3)
Notice that (A3) is already of the form (1) with l = 1. According to Corollary 1, sol(A3) = Par(A4) ∩ R_0, where
max ‖B_y C⁻¹ χ‖₂; min ‖χ‖₂, (A4)
and R_0 := {χ ∈ ℝᴺ : ‖B_y C⁻¹ χ‖₂ = 1}. Following the Discussion Section, the multioptimization (A4) can be solved via [21]. The resulting coils are displayed in Figure A1a–c.
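The chain (A1) → (A3) can be walked through numerically. The sketch below uses stand-in data (a random symmetric positive definite L playing the role of the inductance matrix and a random B_y mapping stream-function coefficients to the field at the sensor); it illustrates the change of variables and the single-constraint solution, not the actual coil model behind Figure A1.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 6

# Hypothetical stand-ins: L symmetric positive definite, By the field operator.
A = rng.normal(size=(N, N))
L = A @ A.T + N * np.eye(N)
By = rng.normal(size=(2, N))

C = np.linalg.cholesky(L).T    # L = C^T C with C upper triangular
M = By @ np.linalg.inv(C)      # operator of (A3): chi -> By C^{-1} chi

# min ||chi|| subject to ||M chi|| = 1 is attained at chi* = v_1 / sigma_1,
# with sigma_1, v_1 the top singular pair of M (the single-constraint closed form).
U, sigma, Vt = np.linalg.svd(M)
chi_star = Vt[0] / sigma[0]
psi_star = np.linalg.solve(C, chi_star)   # undo the change of variables psi = C^{-1} chi

print(np.isclose(np.linalg.norm(By @ psi_star), 1.0))           # field constraint met
print(np.isclose(psi_star @ L @ psi_star, 1.0 / sigma[0]**2))   # minimal stored energy
```

Note that ψᵀLψ = ‖Cψ‖₂² = ‖χ‖₂², so minimality in χ transfers directly to the energy ψᵀLψ.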
Figure A1. (a) Conducting surface and Region of Interest (ROI) where the sensor is placed; (b) coil wires obtained from the solution of the optimization problem; (c) y component of the magnetic field in a z   =   0 plane. Dotted white lines indicate the location of the sensor.

References

  1. Aizpuru, A.; García-Pacheco, F.J. Reflexivity, contraction functions and minimum-norm elements. Studia Sci. Math. Hungar. 2005, 42, 431–443. [Google Scholar] [CrossRef]
  2. Blatter, J. Reflexivity and the existence of best approximations. In Approximation Theory, II (Proceedings International Symposium, University of Texas at Austin, 1976); Academic Press: New York, NY, USA; London, UK, 1976; pp. 299–301. [Google Scholar]
  3. Campos-Jiménez, A.; Vílchez-Membrilla, J.A.; Cobos-Sánchez, C.; García-Pacheco, F.J. Analytical solutions to minimum-norm problems. Mathematics 2022, 10, 1454. [Google Scholar] [CrossRef]
  4. Moreno-Pulido, S.; Sánchez-Alzola, A.; García-Pacheco, F.J. Revisiting the minimum-norm problem. J. Inequal. Appl. 2022, 2022, 22. [Google Scholar] [CrossRef]
  5. Wassermann, E.; Epstein, C.; Ziemann, U.; Walsh, V.; Paus, T.; Lisanby, S. Oxford Handbook of Transcranial Stimulation (Oxford Handbooks), 1st ed.; Oxford University Press: New York, NY, USA, 2008; Available online: http://gen.lib.rus.ec/book/index.php?md5=BA11529A462FDC9C5A1EF1C28E164A7D (accessed on 18 June 2024).
  6. Huang, N.; Ma, C.-F. Modified conjugate gradient method for obtaining the minimum-norm solution of the generalized coupled Sylvester-conjugate matrix equations. Appl. Math. Model. 2016, 40, 1260–1275. [Google Scholar] [CrossRef]
  7. Pissanetzky, S. Minimum energy MRI gradient coils of general geometry. Meas. Sci. Technol. 1992, 3, 667. [Google Scholar] [CrossRef]
  8. Romei, V.; Murray, M.M.; Merabet, L.B.; Thut, G. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: Implications for multisensory interactions. J. Neurosci. 2007, 27, 11465–11472. [Google Scholar] [CrossRef] [PubMed]
9. Márquez, A.P.; García-Pacheco, F.J.; Mengibar-Rodríguez, M.; Sánchez-Alzola, A. Supporting vectors vs. principal components. AIMS Math. 2023, 8, 1937–1958. [Google Scholar] [CrossRef]
  10. Singer, I. On best approximation in normed linear spaces by elements of subspaces of finite codimension. Rev. Roum. Math. Pures Appl. 1972, 17, 1245–1256. [Google Scholar]
  11. Singer, I. The theory of best approximation and functional analysis. In Vol. No. 13 of Conference Board of the Mathematical Sciences Regional Conference Series in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1974. [Google Scholar]
  12. James, R.C. Characterizations of reflexivity. Studia Math. 1963, 23, 205–216. [Google Scholar] [CrossRef]
  13. James, R.C. A counterexample for a sup theorem in normed spaces. Israel J. Math. 1971, 9, 511–512. [Google Scholar] [CrossRef]
  14. Martín, M. On proximinality of subspaces and the lineability of the set of norm-attaining functionals of Banach spaces. J. Funct. Anal. 2020, 278, 108353. [Google Scholar] [CrossRef]
  15. Bandyopadhyay, P.; Li, Y.; Lin, B.-L.; Narayana, D. Proximinality in Banach spaces. J. Math. Anal. Appl. 2008, 341, 309–317. [Google Scholar] [CrossRef]
  16. García-Pacheco, F.J.; Rambla-Barreno, F.; Seoane-Sepúlveda, J.B. Q-linear functions, functions with dense graph, and everywhere surjectivity. Math. Scand. 2008, 102, 156–160. [Google Scholar] [CrossRef]
  17. Read, C.J. Banach spaces with no proximinal subspaces of codimension 2. Israel J. Math. 2018, 223, 493–504. [Google Scholar] [CrossRef]
  18. Rmoutil, M. Norm-attaining functionals need not contain 2-dimensional subspaces. J. Funct. Anal. 2017, 272, 918–928. [Google Scholar] [CrossRef]
  19. Koponen, L.M.; Nieminen, J.O.; Ilmoniemi, R.J. Minimum-energy coils for transcranial magnetic stimulation: Application to focal stimulation. Brain Stimul. 2015, 8, 124–134. [Google Scholar] [CrossRef]
  20. Koponen, L.M.; Nieminen, J.O.; Mutanen, T.P.; Stenroos, M.; Ilmoniemi, R.J. Coil optimisation for transcranial magnetic stimulation in realistic head geometry. Brain Stimul. 2017, 10, 795–805. [Google Scholar] [CrossRef] [PubMed]
  21. Moreno-Pulido, S.; Garcia-Pacheco, F.J.; Cobos-Sanchez, C.; Sanchez-Alzola, A. Exact solutions to the maxmin problem max‖Ax‖ subject to ‖Bx‖ ≤ 1. Mathematics 2020, 8, 85. [Google Scholar] [CrossRef]
22. Garcia-Pacheco, F.J.; Cobos-Sanchez, C.; Moreno-Pulido, S.; Sanchez-Alzola, A. Exact solutions to max_{‖x‖=1} ∑_{i=1}^{∞} ‖T_i(x)‖² with applications to Physics, Bioengineering and Statistics. Commun. Nonlinear Sci. Numer. Simul. 2020, 82, 105054. [Google Scholar] [CrossRef]
  23. Cobos-Sánchez, C.; Vilchez-Membrilla, J.A.; Campos-Jiménez, A.; García-Pacheco, F.J. Pareto optimality for multioptimization of continuous linear operators. Symmetry 2021, 13, 661. [Google Scholar] [CrossRef]