Article

On the Finite Complexity of Solutions in a Degenerate System of Quadratic Equations: Exact Formula

by Olga Brezhneva 1,†, Agnieszka Prusińska 2,*,† and Alexey A. Tret’yakov 2,3,†
1 Department of Mathematics, Miami University, Oxford, OH 45056, USA
2 Faculty of Exact and Natural Sciences, Siedlce University, ul. Konarskiego 2, 08-110 Siedlce, Poland
3 Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warszawa, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2023, 25(8), 1112; https://doi.org/10.3390/e25081112
Submission received: 23 May 2023 / Revised: 18 July 2023 / Accepted: 19 July 2023 / Published: 25 July 2023
(This article belongs to the Section Complexity)

Abstract:
The paper describes an application of the p-regularity theory to Quadratic Programming (QP) and nonlinear equations with quadratic mappings. In the first part of the paper, a special structure of the nonlinear equation and a construction of the 2-factor operator are used to obtain an exact formula for a solution to the nonlinear equation. In the second part of the paper, the QP problem is reduced to a system of linear equations using the 2-factor operator. The solution to this system represents a local minimizer of the QP problem along with its corresponding Lagrange multiplier. An explicit formula for the solution of the linear system is provided. Additionally, the paper outlines a procedure for identifying active constraints, which plays a crucial role in constructing the linear system.
MSC:
65H10; 90C20; 65K05; 90C30

1. Introduction

Consider the nonlinear equation F(x) = 0 with the mapping F defined by:
F(x) = B[x]^2 + Mx + N, (1)
where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is the map defined for x ∈ R^n by:
B[x]^2 = B(x, x) = ( x^T B_1 x, …, x^T B_n x )^T, (2)
where B_i is an n×n symmetric matrix for i = 1, …, n.
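For concreteness, the mappings (1) and (2) are straightforward to evaluate numerically. The following is a minimal NumPy sketch (the function names quad_map and F_map are ours, not from the paper), with B represented by the list of symmetric matrices B_1, …, B_n:

```python
import numpy as np

def quad_map(Bs, x):
    """B[x]^2 = (x^T B_1 x, ..., x^T B_n x)^T from Eq. (2); Bs = [B_1, ..., B_n]."""
    return np.array([x @ Bi @ x for Bi in Bs])

def F_map(Bs, M, N, x):
    """F(x) = B[x]^2 + M x + N from Eq. (1)."""
    return quad_map(Bs, x) + M @ x + N
```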
We also consider the quadratic programming (QP) problem with inequality constraints:
minimize_x f(x) = (1/2) x^T Q x + c^T x subject to Ax ≤ b, (QP) (3)
where Q is an n×n symmetric matrix, A is an m×n matrix, c, x ∈ R^n, and b ∈ R^m.
The paper describes an application of the p-regularity theory to nonlinear equations with the mapping F introduced in (1) and to the quadratic programming problem (3).
In recent years, there has been growing interest in nonlinear problems, including quadratic and polynomial equations, as well as nonlinear optimization problems, attracting specialists from various disciplines (see, for example, refs. [1,2,3,4] and references therein). Furthermore, it was observed that nonlinear problems are closely related to singular problems, as demonstrated in [5]. In fact, it has been discovered that essentially nonlinear problems and singular problems are locally equivalent. In this work, we aim to provide a theoretical foundation for this claim by introducing several auxiliary concepts as proposed in [5].
Definition 1. 
Let V be a neighborhood of x^* in R^n, and let U ⊆ R^n be a neighborhood of 0. A mapping F : V → R^n, where F ∈ C^2(V), is considered essentially nonlinear at x^* if there exists a perturbation of the form:
F̃(x^* + x) = F(x^* + x) + ω(x), where ω(x) = o(‖x‖),
such that no nondegenerate transformation of coordinates φ(x) : U → V, where φ ∈ C^1(U), satisfies φ(0) = x^*, φ'(0) = I_n, where I_n is the identity map in R^n, and:
F̃(φ(x)) = F̃(x^*) + F̃'(x^*)x for all x ∈ U.
Definition 2. 
We say that the mapping F is singular (or degenerate) at x^* if it fails to be regular, meaning its derivative is not onto:
Im F'(x^*) ≠ R^n.
The relationship between the notions of essential nonlinearity and singularity is established in Theorem 1, which was derived in [5].
Theorem 1. 
Let V be a neighborhood of x^* in R^n. Suppose F : V → R^n is C^2 and that x^* is a solution of F(x) = 0. Then F is essentially nonlinear at the point x^* if and only if F is singular at the point x^*.
The work presented in [5] primarily focuses on the construction of p-regularity and its applications in various areas of mathematics. However, it does not specifically cover quadratic nonlinear equations and quadratic programming problems. The current paper builds upon the foundation of the p-regularity theory established in [5] but introduces novel results. The main objective of this paper is to explore the key aspects of nonlinear problems, with a particular emphasis on systems of quadratic equations and quadratic programming problems that may involve singular solutions.
Specifically, we begin by considering the nonlinear equation F(x) = 0. One of the main goals of the paper is to derive the exact formula for a solution x^* of the nonlinear equation F(x) = 0 using the special form of the quadratic mapping F defined in (1). We demonstrate how to use a construction of a special 2-factor-operator to transform the original problem into a system of linear equations. The construction of the 2-factor-operator combines the first and second derivatives of the mapping F.
In the second part of the paper, we apply a similar approach to the quadratic programming problem (3) in order to derive explicit formulas for the solution ( x * , λ * ) , where x * represents a local minimizer of the QP problem and λ * is the corresponding Lagrange multiplier. Namely, using the special form of the QP problem and the 2-factor-operator, we reduce the system of optimality conditions for the QP problem to a system of linear equations, with the point ( x * , λ * ) as its solution. The paper also describes a procedure for identifying the active constraints, which plays a vital role in constructing the linear system.
Although there is literature on solutions of degenerate systems of quadratic equations, the approach presented in this paper is novel and distinct from the methods proposed by other authors. This approach can be applied to various problems and areas of mathematics where the problem involves solving a degenerate equation F(x) = 0 with a quadratic mapping F. Such nonlinear problems can arise in the numerical solution and analysis of ordinary differential equations, partial differential equations, optimal control problems, algebraic geometry, and other fields. In the second part of the paper, we specifically focus on using the methods developed in the first sections to solve the QP problem (3). Quadratic programming problems have attracted the attention of many researchers, so there is an extensive body of literature on the topic. Some publications in this area include [6,7,8,9,10,11,12].
The outline of the paper. The main contribution and novelty of the paper are in the exact formulas for a solution of a nonlinear equation and of the quadratic programming problems, presented in Section 3 and Section 5, respectively.
In Section 2, we recall the main definitions of the p-regularity theory, as presented in [5], including the special case of p = 2 . Additionally, we introduce the p-factor method for solving singular nonlinear equations of the form F ( x ) = 0 and describe various versions of the 2-factor method.
Section 3 presents some of the key results of the paper, focusing on the application of a modified 2-factor method to solve the nonlinear equation F(x) = 0 with the mapping F defined as F(x) = B[x]^2 + Mx + N, where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is defined by (2). In this section, we introduce multiple approaches to obtain exact formulas for a solution to the nonlinear equation F(x) = 0, demonstrating that the proposed methods converge to a solution x^* of the nonlinear equation in just one iteration.
Section 4 focuses on an auxiliary result used in other parts of the paper. We present a theorem that describes the properties of a special mapping μ(x), which enables us to propose a procedure for determining r linearly independent vectors ∇f_{i_k}(x^*), k = 1, …, r, at the solution x^* of F(x) = 0, without needing to know the exact value of x^*. This procedure relies on information about the system of vectors {∇f_1(x), …, ∇f_m(x)} at some point x within a small neighborhood of x^*.
Section 5 presents other novel results, focusing on deriving exact formulas for a solution of quadratic programming problems. The section is divided into three parts. In Section 5.1, we consider regular quadratic programming problems and propose three approaches to solving the QP problem and obtaining a formula for its solution. These approaches are based on the construction of the 2-factor-operator. Section 5.2 addresses the issue of identifying the active constraints and proposes strategies for numerically determining the set of active constraints I ( x * ) . These techniques are then applied in the final part, Section 5.3, to address degenerate QPs. The paper concludes with some closing remarks in Section 6.
Notation. Let a_i denote the rows of the m×n matrix A in problem (3), and let b = (b_1, …, b_m)^T, so that a_i^T ∈ R^n and b_i ∈ R for i = 1, …, m.
The active set I(x^*) at any feasible point x^* of problem (3) is the set of indices of the active constraints at x^*, i.e., I(x^*) = { i = 1, …, m : a_i x^* − b_i = 0 }.
Furthermore, Ker S = { x ∈ R^n : Sx = 0 } denotes the null-space (kernel) of a given linear operator S : R^n → R^m, and Im S = { y ∈ R^m : y = Sx for some x ∈ R^n } is its image space.
Let B : R^n × R^n → R^n be a bilinear symmetric mapping. The 2-form associated with B is the map B[·]^2 : R^n → R^n defined by B[x]^2 = B(x, x) for x ∈ R^n. We also use the following notation: Im_2 B = { y ∈ R^n : B[x]^2 = y for some x ∈ R^n } and Ker_2 B = { x ∈ R^n : B[x]^2 = 0 }. We denote by N(x^*) and N_ε(x^*) neighborhoods of a point x^*, where N_ε(x^*) is an ε-neighborhood of x^*, i.e., an open ball of radius ε centered at x^*.
The notation for the scalar (dot) product of vectors x and y in R^n, used in the paper, is x · y = x^T y.
We denote by span(a_1, …, a_m) the linear span of the given vectors a_1, …, a_m. We also denote by d(x, S) the distance between a point x and a set S.

2. Elements of the p-Regularity Theory

We begin this section with the main definitions of the p-regularity theory, which are given in [5]. The primary focus is on the sufficiently smooth mapping F from R^n to R^n, defined as:
F(x) = ( f_1(x), …, f_n(x) )^T, (4)
where f_i(x) : R^n → R for i = 1, …, n. After presenting the general case, we focus on the specific case of p = 2. We introduce the definitions of the 2-regular mapping and the 2-factor-operator, which play a central role in the subsequent sections.

2.1. The Main Definitions and Constructions of the p-Regularity Theory

Throughout this section, we consider the nonlinear equation:
F(x) = 0, (5)
where F is defined in Equation (4). Let x^* ∈ R^n represent a solution to the nonlinear Equation (5).
The mapping F is called regular at x^* if:
Im F'(x^*) = R^n, (6)
or in other words, if:
rank F'(x^*) = n,
where F'(x^*) is the Jacobian matrix of the mapping F at x^*. Conversely, the mapping F is called nonregular (irregular, degenerate) if the regularity condition (6) is not satisfied.
Let the space R^n be decomposed into the direct sum:
R^n = Y_1 ⊕ … ⊕ Y_p, (7)
where Y_1 = cl(Im F'(x^*)) is defined as the closure of the image of the first derivative of F evaluated at x^*, and p is chosen as the minimum number for which Equation (7) holds.
The remaining spaces are defined as follows. Let Z_1 = R^n, and let Z_2 be a closed complementary subspace to Y_1. Let P_{Z_2} : R^n → Z_2 be the projection operator onto Z_2 along Y_1. Define Y_2 as the closed linear span of the image of the quadratic map P_{Z_2} F^{(2)}(x^*)[·]^2. More generally, we define Y_i inductively for i = 2, …, p − 1 as:
Y_i = cl span Im P_{Z_i} F^{(i)}(x^*)[·]^i ⊆ Z_i,
where Z_i is a choice of a complementary subspace for (Y_1 ⊕ … ⊕ Y_{i−1}) with respect to R^n, i = 2, …, p, and P_{Z_i} : R^n → Z_i is the projection operator onto Z_i along (Y_1 ⊕ … ⊕ Y_{i−1}), i = 2, …, p. Finally, we let Y_p = Z_p.
Define the following mappings:
F_i(x) : R^n → Y_i, F_i(x) = P_{Y_i} F(x), i = 1, …, p, (8)
where P_{Y_i} : R^n → Y_i is the projection operator onto Y_i along (Y_1 ⊕ … ⊕ Y_{i−1} ⊕ Y_{i+1} ⊕ … ⊕ Y_p), i = 1, …, p. Then F can be represented as F(x) = F_1(x) + … + F_p(x) or, equivalently, as F(x) = (F_1(x), …, F_p(x)).
Definition 3. 
The linear operator Ψ_p(h) ∈ L(R^n, Y_1 ⊕ … ⊕ Y_p), where h ∈ R^n, h ≠ 0, is defined by:
Ψ_p(h) = F_1'(x^*) + F_2''(x^*)[h] + … + F_p^{(p)}(x^*)[h]^{p−1},
and is called the p-factor operator.
Consider the nonlinear operator Ψ_p[·]^p defined by:
Ψ_p[x]^p = F_1'(x^*)[x] + F_2''(x^*)[x]^2 + … + F_p^{(p)}(x^*)[x]^p.
Notice that Ψ_p[x]^p = Ψ_p(x)[x].
Definition 4. 
The p-kernel of the operator Ψ_p at the point x^* is the set H_p(x^*) defined by:
H_p(x^*) = Ker_p Ψ_p,
where:
Ker_p Ψ_p = { h ∈ R^n : F_1'(x^*)[h] + F_2''(x^*)[h]^2 + … + F_p^{(p)}(x^*)[h]^p = 0 }.
Please note that Ker_p Ψ_p = ∩_{k=1}^p Ker_k F_k^{(k)}(x^*), where:
Ker_k F_k^{(k)}(x^*) = { ξ ∈ R^n : F_k^{(k)}(x^*)[ξ]^k = 0 }
is the k-kernel of F_k^{(k)}(·)[·]^k.
Definition 5. 
A mapping F is called p-regular at x^* along h if Im Ψ_p(h) = R^n.
Definition 6. 
A mapping F is called p-regular at x^* if it is p-regular along all h ∈ H_p(x^*) \ {0} or if H_p(x^*) = {0}.
Now, we will focus on the special case of p = 2, which we are using in the paper. We denote the image of the Jacobian matrix F'(x^*) by R_1 : R_1 = Im F'(x^*), and the orthogonal complementary subspace of R_1 in R^n by R_2. Then:
R^n = R_1 ⊕ R_2.
We also denote by P_{R_i}, i = 1, 2, the n×n matrix of the orthogonal projection onto R_i in R^n.
Similarly to Equation (8), we introduce the mappings:
F_i(x) = P_{R_i} F(x), i = 1, 2.
The p-factor operator plays the central role in the p-regularity theory. We give the following definition of the p-factor-operator for p = 2 .
Definition 7. 
We define a 2-factor-operator of the mapping F at x^* with respect to some vector h ∈ R^n, h ≠ 0, as a linear operator from R^n to R^n, defined by one of the following equations:
Ψ_2(h) = F_1'(x^*) + F_2''(x^*)[h], (9)
Ψ̄_2(h) = F'(x^*) + P_{R_2} F''(x^*)[h], (10)
Ψ̃_2(h) = F'(x^*) + F''(x^*)[h]. (11)
Now we are ready to introduce another very important definition of the 2-regularity theory.
Definition 8. 
The mapping F is called 2-regular at the point x^* with respect to the element h if the image of a 2-factor-operator, defined by one of the Equations (9)–(11), is equal to R^n.
Definition 9. 
The mapping F is called 2-regular at x^* if it is 2-regular at x^* with respect to all the elements h from a set that is defined as:
1. H_2(x^*) = Ker F_1'(x^*) ∩ Ker_2 F_2''(x^*) for Ψ_2(h) defined by (9);
2. H̄_2(x^*) = Ker F'(x^*) ∩ Ker_2 P_{R_2} F''(x^*) for Ψ̄_2(h) defined by (10);
3. H̃_2(x^*) = Ker F'(x^*) ∩ Ker_2 F''(x^*) for Ψ̃_2(h) defined by (11).

2.2. The p-Factor-Method for Solving Singular Nonlinear Equations

In this section, we introduce the p-factor-method for solving the singular nonlinear equation F ( x ) = 0 . Then we consider the special case of p = 2 and describe several versions of the 2-factor-method.
Consider Equation (5) in the case when the mapping F(x) is singular at x^*. In this case, the p-factor method is an iterative procedure defined by:
x_{k+1} = x_k − [ F'(x_k) + P_2 F''(x_k)[h] + … + P_p F^{(p)}(x_k)[h]^{p−1} ]^{−1} · [ F(x_k) + P_2 F'(x_k)[h] + … + P_p F^{(p−1)}(x_k)[h]^{p−1} ], (12)
where k = 0, 1, …, P_i = P_{Y_i} for i = 1, 2, …, p, and the vector h, ‖h‖ = 1, is chosen in such a way that the p-factor operator Ψ_p is nonsingular, which implies that the mapping F is p-regular at x^* along h. The following theorem is valid for the p-factor-method (12).
Theorem 2. 
Assume that the mapping F ∈ C^{p+1}(R^n) and there exists a vector h, ‖h‖ = 1, such that the p-factor operator Ψ_p is nonsingular. Given a point x_0 ∈ N_ε(x^*), where ε > 0 is sufficiently small and N_ε(x^*) is a neighborhood of x^*, the sequence {x_k} defined by Equation (12) converges quadratically to the solution x^* of (5):
‖x_{k+1} − x^*‖ ≤ C ‖x_k − x^*‖^2, k = 0, 1, …, (13)
where C > 0 is an independent constant.
Now, we are ready to describe several versions of the 2-factor-method.
For solving singular nonlinear Equation (5), the following iterative method, called the 2-factor-method, was proposed in [13]:
x_{k+1} = x_k − [ F'(x_k) + P_{R_2} F''(x_k)[h] ]^{−1} ( F(x_k) + P_{R_2} F'(x_k)[h] ), k = 0, 1, 2, …, (14)
where the vector h, ‖h‖ = 1, is chosen in such a way that the matrix F'(x^*) + P_{R_2} F''(x^*)[h] is invertible.
The following theorem states the convergence properties of the 2-factor-method (14).
Theorem 3. 
Given a mapping F ∈ C^3(R^n), let x^* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ‖h‖ = 1 and F is 2-regular at the point x^* with respect to the vector h, with the 2-factor-operator Ψ̄_2(h) defined by (10).
Then there is a neighborhood N(x^*) of x^* in R^n such that for any x_0 ∈ N(x^*), the sequence {x_k} generated by the 2-factor-method (14) converges to x^* and:
‖x_{k+1} − x^*‖ ≤ C ‖x_k − x^*‖^2, (15)
where C > 0 is some constant.
Proof. 
Since P_{R_2} is the orthoprojector onto the subspace R_2 = R_1^⊥, for the mapping:
Φ(x) = F(x) + P_{R_2} F'(x)[h],
we have Φ(x^*) = 0. Moreover, because Φ'(x^*) = Ψ̄_2(h) and the mapping F is 2-regular with respect to the vector h, by Definition 8 with Ψ̄_2(h) defined by (10), we obtain that Im Ψ̄_2(h) = R^n. Hence, the matrix Φ'(x^*) is invertible.
Therefore, the 2-factor-method given in (14) is an application of Newton's method to the system Φ(x) = 0 in a sufficiently small neighborhood of x^*. The statement of the theorem then follows from the properties of Newton's method [14] (Proposition 1.4.1). □
Now, we will introduce a modified version of the 2-factor-method (14). Assume that there exists a vector h ≠ 0 such that F'(x^*)h = 0 and the matrix ( F'(x^*) + F''(x^*)[h] ) is invertible. Then for solving Equation (5), we can use the following modified 2-factor-method:
x_{k+1} = x_k − [ F'(x_k) + F''(x_k)[h] ]^{−1} ( F(x_k) + F'(x_k)[h] ), k = 0, 1, 2, …. (16)
The following theorem states the convergence properties of method (16).
Theorem 4. 
Given a mapping F ∈ C^3(R^n), let x^* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ‖h‖ = 1, F'(x^*)h = 0, and F is 2-regular at the point x^* with respect to the vector h, where the 2-factor-operator Ψ̃_2(h) is defined by (11).
Then there is a neighborhood N(x^*) of x^* in R^n such that for any x_0 ∈ N(x^*), the sequence {x_k} generated by the 2-factor-method (16) converges to x^*, and relation (15) holds with some C > 0.
The proof is similar to that of Theorem 3.
Now we introduce another version of the 2-factor-method.
Assume that the following conditions hold:
1. There exists a vector h ≠ 0 such that F'(x^*)h = 0;
2. The matrix F''(x^*)[h] is invertible. (17)
Note that if conditions (17) are satisfied, then to solve Equation (5), we can use the following modified 2-factor-method:
x_{k+1} = x_k − [ F''(x_k)[h] ]^{−1} F'(x_k)[h], k = 0, 1, 2, …. (18)
For the numerical realization of the 2-factor-method in the form (18), we only have to construct a vector h satisfying conditions (17). The specifics of some problems allow us to choose the vector h without any knowledge of the solution x^*. We discuss the choice of the vector h in the following sections of the paper.
The following theorem states the convergence properties of method (18).
Theorem 5. 
Given a mapping F ∈ C^3(R^n), let x^* be a solution of Equation (5). Assume that there exists a vector h ∈ R^n such that ‖h‖ = 1 and conditions (17) are satisfied.
Then, there is a neighborhood N(x^*) of x^* in R^n such that for any x_0 ∈ N(x^*), the sequence {x_k} generated by the 2-factor-method (18) converges to x^* and relation (15) holds with some C > 0.
The proof is similar to that of Theorem 3.

3. Nonlinear Equations with Quadratic Mappings: The Exact Solution Formula

In this section, we consider the mapping F defined by Equation (1) as follows:
F(x) = B[x]^2 + Mx + N,
where M ∈ R^{n×n} is a matrix, N ∈ R^n is a vector, and B : R^n → R^n is the map defined by (2). The mapping B is twice continuously differentiable [15], and its derivatives are given by (B[x]^2)' = 2B[x] and (B[x]^2)'' = 2B for an arbitrary x ∈ R^n. Let x^* denote a solution of the equation F(x) = 0.
We will now illustrate the application of the 2-factor method (18) for solving the nonlinear equation F ( x ) = 0 with the mapping F defined by (1). We will present multiple approaches to obtain an exact formula for x * , with the first approach being a specific case of the second approach. Additionally, we will show that for the mapping F, the method (18) converges to x * in just one iteration.
  • First approach to obtain an exact formula for the solution x * .
For the mapping F defined by (1), the assumptions (17) of Theorem 5 can be simplified to the existence of a vector h that satisfies the following conditions:
(1) (2B[x^*] + M)h = 0;
(2) the matrix 2B[h] is invertible. (19)
Under these assumptions (19), for the mapping F defined by (1) and a given point x_0, the first iteration of the 2-factor-method (18) can be written as:
x_1 = x_0 − (2B[h])^{−1} (2B[x_0] + M)h,
which is equivalent to:
(2B[h])(x_1 − x_0) = −(2B[x_0] + M)h.
Using the property 2B(h, x_0) = 2B(x_0, h), the last equation implies a one-step method for calculating x_1 and, consequently, finding the solution x^*:
x^* = −(1/2) (B[h])^{−1} (Mh), (20)
where the vector h satisfies conditions (19).
The numerical determination of the vector h depends on the specific characteristics of the problem. Alternatively, it can be obtained using the same method as described in the third approach below, which involves transforming the initial system into a system that is completely degenerate at the point x * .
  • Second approach to obtain an exact formula for the solution x * .
Now we present an alternative approach for obtaining a formula for the solution x * of the equation F ( x ) = 0 using the same mapping F defined by (1). This second approach is applicable to a broader variety of problems compared to the first approach.
Let P_1 denote the projector onto Y_1 = span(Im_2 B), and let P_2 denote the projector onto Y_1^⊥, which is the orthogonal complementary subspace of Y_1 in R^n. We note that P_2(B[x^*]^2) = 0 and:
4P_2 B(x^*, x) = P_2( B[x + x^*]^2 − B[x − x^*]^2 ) = 0.
Then for the mapping F defined in (1),
P_1 F'(x^*) = P_1(2B[x^*] + M) and P_2 F'(x^*) = P_2 M.
Assume that there exists a vector h ∈ R^n satisfying the conditions:
(1) P_1(2B[x^*] + M)h = 0;
(2) the matrix ( P_2 M + 2P_1 B[h] ) is invertible. (21)
Given the definition of P_2, it follows that P_2 B[x^*]^2 = 0. Substituting this into (1), we obtain P_2(Mx^* + N) = 0. Hence, the point x^* satisfies the following identities:
P_2(Mx^* + N) = 0, P_1(2B[x^*] + M)h = 0.
By adding these equations and assuming (21), we obtain the exact formula for the solution x^*:
x^* = −( P_2 M + 2P_1 B[h] )^{−1} ( P_2 N + P_1 Mh ). (22)
Remark 1. 
In the case when P_1 = I_n and, hence, P_2 = O_{n×n}, assumptions (21) become (19), and Equation (22) reduces to (20).
Example 1. 
Consider the mapping F : R^2 → R^2 given by:
F(x) = ( x_1^2 − x_2^2 − 2x_1 + 1 ; x_1 x_2 − x_2 ). (23)
We can represent the mapping F in the form (1) with:
B_1 = [1, 0; 0, −1], B_2 = [0, 1/2; 1/2, 0], M = [−2, 0; 0, −1], and N = (1, 0)^T.
The equation F(x) = 0 has a locally unique solution x^* = (1, 0)^T. In this example, P_1 = I_2 and P_2 = O_{2×2}. Hence, by Remark 1, we apply Equation (20) with h = (1, 0)^T to obtain:
x^* = −(1/2) (B[h])^{−1} (Mh) = −(1/2) [1, 0; 0, 2] (−2, 0)^T = (1, 0)^T,
as claimed.
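The computation above is easy to reproduce numerically. The following NumPy sketch implements formula (22) (with P_1 = I it reduces to the special case (20)) and checks it on the data of Example 1; the helper name solve_quadratic_eq is ours, and the matrix B[h] is formed with rows (B_i h)^T:

```python
import numpy as np

def solve_quadratic_eq(Bs, M, N, h, P1=None):
    """Formula (22): x* = -(P2 M + 2 P1 B[h])^{-1} (P2 N + P1 M h).
    P1 is the projector onto span(Im_2 B); P1 = I gives formula (20)."""
    n = len(Bs)
    P1 = np.eye(n) if P1 is None else P1
    P2 = np.eye(n) - P1
    Bh = np.vstack([Bi @ h for Bi in Bs])      # matrix B[h], rows (B_i h)^T
    return -np.linalg.solve(P2 @ M + 2.0 * P1 @ Bh, P2 @ N + P1 @ (M @ h))

# Data of Example 1; here P1 = I_2, so (22) reduces to (20).
B1 = np.array([[1.0, 0.0], [0.0, -1.0]])
B2 = np.array([[0.0, 0.5], [0.5, 0.0]])
M = np.array([[-2.0, 0.0], [0.0, -1.0]])
N = np.array([1.0, 0.0])
print(solve_quadratic_eq([B1, B2], M, N, h=np.array([1.0, 0.0])))  # -> [1. 0.]
```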
In a numerical implementation, an additional procedure is required to construct the vector h. Since the exact point x * is not known in advance, we only assume that a sufficiently small neighborhood of x * is provided to apply the procedure.
  • Third approach to obtain an exact formula for the solution x * .
While the first two approaches rely on knowledge of the element h, which is determined by x * , the third approach does not require such knowledge. Instead, all we need is for the starting point to belong to a sufficiently small neighborhood N ε ( x * ) of x * . Specifically, we have x 0 N ε ( x * ) , where ε > 0 is sufficiently small.
Suppose that at the point x^*, the first r vectors {∇f_1(x^*), …, ∇f_r(x^*)} are linearly independent, where f_i is defined in (4) for i = 1, …, r. Assume also that the other vectors {∇f_{r+1}(x^*), …, ∇f_n(x^*)} are linear combinations of the first r vectors. Therefore, there exist coefficients α_{ij} such that:
∇f_j(x^*) = Σ_{i=1}^r α_{ij} ∇f_i(x^*), j = r + 1, …, n.
Let us introduce the subspace L(x) defined by:
L(x) = span( ∇f_1(x), …, ∇f_r(x) ).
We denote the orthogonal projection onto the subspace L(x) by P_{L(x)}. Then, there exist coefficients α_{ij}(x) such that:
P_{L(x)} ∇f_j(x) = Σ_{i=1}^r α_{ij}(x) ∇f_i(x), j = r + 1, …, n.
In addition, introduce the notation:
∇f̂_j(x) = ∇f_j(x) − P_{L(x)} ∇f_j(x), j = r + 1, …, n. (24)
Then:
∇f̂_j(x) = ∇f_j(x) − Σ_{i=1}^r α_{ij}(x) ∇f_i(x), j = r + 1, …, n.
Notice that x^* is also a solution of the equation F̂(x) = 0, where F̂(x) is defined as:
F̂(x) = ( f_1(x), …, f_r(x), f̂_{r+1}(x), …, f̂_n(x) )^T.
The definition of F̂(x) implies that F̂ is 2-regular at the point x^*. In the case that some of the vectors ∇f_k(x^*), k ∈ {r + 1, …, n}, are not zero vectors, transformation (24) can be used to reduce those vectors to zero vectors. This ensures that ∇f̂_k(x^*) = 0 for all k ∈ {r + 1, …, n}. Therefore, without loss of generality, we can assume that the mapping F(x) satisfies ∇f_j(x^*) = 0 for j = r + 1, …, n. An example of a mapping that satisfies these conditions is:
F(x) = ( x_1 + x_2 ; x_1 x_2 ),
where n = 2, r = 1, j = 2, f_1(x) = x_1 + x_2, f_2(x) = x_1 x_2, and x^* = (0, 0)^T.
Suppose there exist vectors ξ_1 ≠ 0, …, ξ_r ≠ 0 and h ≠ 0, and indices k_i ∈ {r + 1, …, n}, i = 1, …, r, such that the system:
∇²f_{k_1}(x^*)ξ_1, …, ∇²f_{k_r}(x^*)ξ_r, ∇²f_{r+1}(x^*)h, …, ∇²f_n(x^*)h
is linearly independent.
Then the mapping F̄(x) defined by:
F̄(x) = ( ∇f_{k_1}(x)·ξ_1, …, ∇f_{k_r}(x)·ξ_r, ∇f_{r+1}(x)·h, …, ∇f_n(x)·h )^T (25)
has x^* as its zero, that is, F̄(x^*) = 0. At the same time, compared to the Jacobian matrix of F(x), the matrix:
F̄'(x^*) = [ (∇²f_{k_1}(x^*)ξ_1)^T ; … ; (∇²f_{k_r}(x^*)ξ_r)^T ; (∇²f_{r+1}(x^*)h)^T ; … ; (∇²f_n(x^*)h)^T ]
is nonsingular. We can, therefore, consider the method:
x_{k+1} = x_k − ( F̄'(x_k) )^{−1} F̄(x_k), k = 1, 2, …. (26)
Theorem 6. 
Given a mapping F ∈ C^2(R^n), let x^* be a solution of Equation (5). Assume that there exist vectors ξ_1 ≠ 0, …, ξ_r ≠ 0 and h ≠ 0 such that the Jacobian of the mapping F̄(x) defined in (25) is nonsingular at x^*. Let x_0 ∈ N_ε(x^*), where N_ε(x^*) is a neighborhood of x^* and ε > 0 is sufficiently small.
Then the sequence {x_k}, k = 1, 2, …, defined by (26) converges to the point x^* with a quadratic rate of convergence, that is:
‖x_{k+1} − x^*‖ ≤ C ‖x_k − x^*‖^2,
where C > 0 is an independent constant.
Using the definition of the mapping F given by Equation (1), the mappings f_i introduced in (4) have the following form:
f_i(x) = x^T B_i x + M_i^T x + N_i, i = 1, 2, …, n,
where B_i is an n×n symmetric matrix, M_i ∈ R^n, and N_i ∈ R, i = 1, 2, …, n.
Given an initial point x_0, we use the iterative method (26) to obtain:
x_1 = x_0 − [ (2B_{k_1}ξ_1)^T ; … ; (2B_{k_r}ξ_r)^T ; (2B_{r+1}h)^T ; … ; (2B_n h)^T ]^{−1} ( (2B_{k_1}x_0 + M_{k_1})^T ξ_1 ; … ; (2B_{k_r}x_0 + M_{k_r})^T ξ_r ; (2B_{r+1}x_0 + M_{r+1})^T h ; … ; (2B_n x_0 + M_n)^T h )
= [ (2B_{k_1}ξ_1)^T ; … ; (2B_n h)^T ]^{−1} [ ( (2B_{k_1}ξ_1)^T x_0 ; … ; (2B_n h)^T x_0 ) − ( (2B_{k_1}x_0)^T ξ_1 ; … ; (2B_n x_0)^T h ) − ( M_{k_1}^T ξ_1 ; … ; M_n^T h ) ].
Because the matrix B_i is symmetric for any index i, for any index j we have:
(2B_i ξ_j)^T x_0 = (2B_i ξ_j) · x_0 = ξ_j · (2B_i^T x_0) = ξ_j · (2B_i x_0) = (2B_i x_0)^T ξ_j.
Therefore, the first two terms in the brackets cancel, and:
x_1 = −(1/2) [ (B_{k_1}ξ_1)^T ; … ; (B_{k_r}ξ_r)^T ; (B_{r+1}h)^T ; … ; (B_n h)^T ]^{−1} ( M_{k_1}·ξ_1 ; … ; M_{k_r}·ξ_r ; M_{r+1}·h ; … ; M_n·h ) = x^*. (27)
Example 1 
(Continuation). Consider the mapping F : R^2 → R^2 defined in (23):
F(x) = ( x_1^2 − x_2^2 − 2x_1 + 1 ; x_1 x_2 − x_2 ),
where:
B_1 = [1, 0; 0, −1], B_2 = [0, 1/2; 1/2, 0],
M_1 = (−2, 0)^T, M_2 = (0, −1)^T, N_1 = 1, N_2 = 0.
In this example, x^* = (1, 0)^T is a solution of F(x) = 0 and:
∇f_1(x^*) = (2x_1^* − 2, −2x_2^*)^T = (0, 0)^T, ∇f_2(x^*) = (x_2^*, x_1^* − 1)^T = (0, 0)^T.
Therefore, the mapping F̄ defined in (25) takes the form:
F̄(x) = ( ∇f_1(x)·h ; ∇f_2(x)·h ),
where h is chosen in such a way that the matrix F̄'(x^*) is nonsingular, and the vectors ξ_i are not used (here r = 0). For example, we can take h = (1, 0)^T. Then Equation (27) has the form:
x_1 = −(1/2) [ (B_1 h)^T ; (B_2 h)^T ]^{−1} ( M_1·h ; M_2·h ) = −(1/2) [1, 0; 0, 1/2]^{−1} (−2, 0)^T = [−1/2, 0; 0, −1] (−2, 0)^T = (1, 0)^T, (28)
which is a solution of F(x) = 0 in this example.
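The one-step computation (27)-(28) for the case r = 0 can be sketched in NumPy as follows (a minimal sketch under the assumptions of Example 1; the variable names are ours):

```python
import numpy as np

# Data of Example 1: f_i(x) = x^T B_i x + M_i^T x + N_i.
B1 = np.array([[1.0, 0.0], [0.0, -1.0]])
B2 = np.array([[0.0, 0.5], [0.5, 0.0]])
M = np.array([[-2.0, 0.0], [0.0, -1.0]])   # rows M_1^T, M_2^T
h = np.array([1.0, 0.0])                   # chosen so that the matrix below is invertible

# One step of method (26) in the form (27) with r = 0:
# x1 = -(1/2) [ (B_1 h)^T ; (B_2 h)^T ]^{-1} (M_1 . h, M_2 . h)^T.
J = np.vstack([B1 @ h, B2 @ h])            # rows (B_i h)^T
x_star = -0.5 * np.linalg.solve(J, M @ h)
print(x_star)                               # -> [1. 0.], the solution of F(x) = 0
```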
The approaches described above can be modified to derive other methods for solving the equation F(x) = 0. For example, using the equation F'(x)^T h = 0, where h ∈ Ker(F'(x^*)^T), we obtain the following method:
x_{k+1} = x_k − ( F''(x_k)^T h )^{−1} ( F'(x_k) )^T h, k = 0, 1, ….
The sequence {x_k} converges to x^* under the assumption that the matrix (F''(x^*)^T)h is nonsingular. In this modification, unlike in the second approach, we can construct an element h without knowledge of the point x^*, based on the information at an initial point x_0.
Applying the modified method to Example 1, we obtain the same formulas and results as shown in Equation (28) above. To implement this approach, it is necessary to determine the vectors ∇f_i(x), i = 1, 2, …, n, which correspond to the linearly independent vectors ∇f_i(x^*), i ∈ {1, 2, …, n}. This can be achieved using information at a point x_0 ∈ N_ε(x^*), where ε > 0 is sufficiently small. If the assumption of p-regularity is satisfied, the identification of linearly independent vectors is performed using the method described in the next section.

4. Procedure for Identifying Zero Elements

The procedure for identifying zero elements can be used to implement the methods described in the previous sections numerically. Let F(x) : R^n → R^m be defined as:
F(x) = ( f_1(x), …, f_m(x) )^T. (29)
In this section, we present a theorem that describes the properties of a special mapping μ(x), which allows us to propose a method for determining r linearly independent vectors ∇f_{i_k}(x^*), k = 1, …, r, at the solution x^* of F(x) = 0. This procedure is based on the information about the system of vectors {∇f_1(x), …, ∇f_m(x)} at some point x in a small neighborhood of x^*. As a result, we can define the mapping F̂(x) whose first r components f_{i_k}(x), k = 1, …, r, correspond to the linearly independent vectors ∇f_{i_k}(x^*), k = 1, …, r.
Let F ∈ C^3(R^n) be 2-regular at the point x^*. For some x ∈ N_ε(x^*), where ε is sufficiently small, we define the following mappings:
ρ(x) = min_{i=1,…,m} d( ∇f_i(x), span(∇f_1(x), …, ∇f_{i−1}(x), ∇f_{i+1}(x), …, ∇f_m(x)) )
and:
μ(x) = max{ ‖F(x)‖^{1/2}, ρ(x) }, (30)
where d(x, S) denotes the distance between an element x and the set S. Note that if F(x) = f_1(x), then ρ(x) = ‖∇f_1(x)‖.
The mapping μ(x) is used to determine the maximum number r of linearly independent vectors in the system {∇f_1(x^*), …, ∇f_m(x^*)} using a special procedure that relies on the information about the mapping F(x) at a point x ∈ N_ε(x^*). The properties of the mapping μ(x) are stated in the following theorem, and the proof can be found in [16].
Theorem 7 
(Minorant theorem). Let F ∈ C^3(R^n) be 2-regular at the point x^*, and let F(x^*) = 0. Then there exist constants ε > 0, C_1 > 0, and C_2 > 0 such that the following inequality holds for any x ∈ N_ε(x^*):
C_1 ‖x − x^*‖ ≤ μ(x) ≤ C_2 ‖x − x^*‖^{1/2},
where the function μ(x) is defined in (30).
In addition to the properties of the mapping μ ( x ) given in Theorem 7, we also need the following lemma (for the proof, see [16]).
Lemma 1. 
For the non-negative mappings g(x) and μ(x), let the following inequalities hold:
|g(x_1) − g(x_2)| ≤ L ‖x_1 − x_2‖ for all x_1, x_2 ∈ N_σ(x^*),
C_1 ‖x − x^*‖ ≤ μ(x) ≤ C_2 ‖x − x^*‖^{1/p} for all x ∈ N_σ(x^*),
where L, C_1, C_2, and σ are positive constants, with C_1 ≤ C_2 and p ≥ 2.
Then, there exists a sufficiently small ε > 0 such that one of the following conditions holds:
1. If g(x) ≤ μ(x) for all x ∈ N_ε(x^*), then g(x^*) = 0.
2. If g(x) > μ(x) for all x ∈ N_ε(x^*), then g(x^*) ≠ 0.
Remark 2. 
Based on the assumptions of Lemma 1, there exists a sufficiently small ε > 0 such that if the inequality g(x̄) ≤ μ(x̄) is satisfied at some x̄ ∈ N_ε(x^*), then the inequality g(x) ≤ μ(x) is satisfied for all x ∈ N_ε(x^*), and hence g(x^*) = 0.
Similarly, if the inequality g(x̄) > μ(x̄) is satisfied at some x̄ ∈ N_ε(x^*), then the inequality g(x) > μ(x) is satisfied for all x ∈ N_ε(x^*), and hence g(x^*) ≠ 0.
Now we are ready to introduce an iterative method that determines the indices i_1, …, i_r corresponding to the linearly independent vectors ∇f_{i_k}(x^*), k = 1, …, r.
  • Method for determining linearly independent gradients at x * (identifying zero elements).
Using Lemma 1 and Remark 2, for a sufficiently small ε > 0, x ∈ N_ε(x^*), and i = 1, …, m, consider two possible cases:
Case 1. g(x) = ‖∇f_i(x)‖ ≤ μ(x).
Case 2. g(x) = ‖∇f_i(x)‖ > μ(x).
In Case 1, according to Remark 2, it follows that ∇f_i(x^*) = 0, whereas in Case 2, we have ∇f_i(x^*) ≠ 0.
Let F(x) : R^n → R^m be defined by (29) and let x^* be a solution of F(x) = 0. Let x ∈ N_ε(x^*), where ε is sufficiently small. Define the function μ(x) using Equation (30).
  • Step 1. Identify the smallest index i_1 in the set S_1 = {1, …, m} such that ‖∇f_{i_1}(x)‖ > μ(x). According to Case 2 above, this implies that ∇f_{i_1}(x^*) ≠ 0.
  • Step 2. Use Step 1 to identify whether the set S_2 = {1, …, m} \ {i_1} has at least one index j such that ∇f_j(x^*) ≠ 0. If it does not, the method is finished. Otherwise, identify the next smallest index i_2 in the set S_2 such that the following condition is satisfied:
d( ∇f_{i_2}(x), span(∇f_{i_1}(x)) ) > μ(x).
According to Case 2 above, this means that the vectors ∇f_{i_1}(x^*) and ∇f_{i_2}(x^*) are linearly independent.
  • Step k. By this step, we have identified k − 1 linearly independent vectors ∇f_{i_1}(x^*), …, ∇f_{i_{k−1}}(x^*), where k = 3, 4, …, m. Use Step 1 to identify whether the set S_k = {1, …, m} \ {i_1, …, i_{k−1}} has at least one index j such that ∇f_j(x^*) ≠ 0. If it does not, the method is finished. Otherwise, identify the next smallest index i_k ∈ S_k such that the following condition is satisfied:
d( ∇f_{i_k}(x), span(∇f_{i_1}(x), …, ∇f_{i_{k−1}}(x)) ) > μ(x).
This inequality implies that the vectors ∇f_{i_1}(x^*), …, ∇f_{i_k}(x^*) are linearly independent.
Repeat Step k until the method is finished; a sketch of this procedure in code follows.
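A minimal NumPy sketch of this procedure is given below. It assumes that the values F(x) and the gradients ∇f_i(x) at a single point x ∈ N_ε(x^*) are available, computes μ(x) from (30), and greedily selects the indices i_1, i_2, …; the function names dist_to_span, mu, and independent_indices are ours, not from the paper:

```python
import numpy as np

def dist_to_span(v, vectors):
    """Distance from v to span(vectors) via least squares; empty span gives ||v||."""
    if not vectors:
        return np.linalg.norm(v)
    A = np.column_stack(vectors)
    resid = v - A @ np.linalg.lstsq(A, v, rcond=None)[0]
    return np.linalg.norm(resid)

def mu(Fx, grads):
    """mu(x) = max(||F(x)||^(1/2), rho(x)) from Eq. (30)."""
    rho = min(dist_to_span(g, [grads[j] for j in range(len(grads)) if j != i])
              for i, g in enumerate(grads))
    return max(np.linalg.norm(Fx) ** 0.5, rho)

def independent_indices(Fx, grads):
    """Steps 1..k: keep index i if grad f_i(x) is farther than mu(x)
    from the span of the gradients already selected."""
    m_val = mu(Fx, grads)
    chosen = []
    for i, g in enumerate(grads):
        if dist_to_span(g, [grads[j] for j in chosen]) > m_val:
            chosen.append(i)
    return chosen
```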
Without loss of generality, assume that the first r vectors {∇f_1(x^*), …, ∇f_r(x^*)} are linearly independent and define the mapping F̂(x) as:
F̂(x) = ( f_1(x), …, f_r(x), f̂_{r+1}(x), …, f̂_m(x) )^T, (31)
where the functions f̂_k(x) are defined in such a way that:
∇f̂_k(x^*) = 0, k = r + 1, …, m.
Namely, let:
∇f_k(x^*) = Σ_{i=1}^r α_{ik}(x^*) ∇f_i(x^*), k = r + 1, …, m,
be a linear combination of the vectors ∇f_1(x^*), …, ∇f_r(x^*). The coefficients α_{ik}(x) are determined by solving the following system of equations:
( ∇f_k(x) − Σ_{i=1}^r α_{ik}(x) ∇f_i(x) ) · ∇f_j(x) = 0, j = 1, 2, …, r, k = r + 1, …, m.
In addition, let:
α_k(x) = ( α_{1k}(x), …, α_{rk}(x) ), k = r + 1, …, m,
and define B(x) to be the nonsingular matrix of the form:
B(x) = [ I_r, O_{r×(m−r)} ; −A(x), I_{m−r} ],
where A(x) is the (m − r)×r matrix whose rows are α_{r+1}(x), …, α_m(x).
Define the following functions:
f̂_k(x) = f_k(x), k = 1, …, r,
f̂_k(x) = f_k(x) − Σ_{i=1}^r α_{ik}(x) f_i(x), k = r + 1, …, m.
These functions allow us to transform the mapping F(x) to F̂(x) = B(x)·F(x), where ∇f̂_1(x^*) ≠ 0, …, ∇f̂_r(x^*) ≠ 0, and ∇f̂_{r+1}(x^*) = 0, …, ∇f̂_m(x^*) = 0. The purpose of this transformation is to simplify the structure of the projection operators.
We present a simple example to illustrate an application of the proposed method.
Example 2. 
Let F : R^2 → R^2, F(x) = (f_1(x), f_2(x))^T, where:
f_1(x) = x_1 + x_2, f_2(x) = x_1 x_2.
Then x^* = (0, 0)^T is a solution of F(x) = 0. Take ε = 1/2 and consider x̄ = (1/2, 1/2)^T ∈ N_ε(x^*). The Jacobian matrix of F is:
F'(x) = [1, 1; x_2, x_1],
so that:
∇f_1(x^*) = (1, 1)^T and ∇f_2(x^*) = (0, 0)^T.
It is easy to see that the vectors ∇f_1(x^*) and ∇f_2(x^*) are linearly dependent. We can check this by applying the method introduced above.
By using Equation (30), we define the function μ(x) = max{ ‖F(x)‖^{1/2}, ρ(x) }, where:
ρ(x) = sin α · ‖∇f_2(x)‖,
and α is the angle between the vectors ∇f_1(x) and ∇f_2(x). Note that:
cos α = ( ∇f_1(x) · ∇f_2(x) ) / ( ‖∇f_1(x)‖ ‖∇f_2(x)‖ ),
and hence:
ρ(x) = ( √( ‖∇f_1(x)‖^2 ‖∇f_2(x)‖^2 − (∇f_1(x) · ∇f_2(x))^2 ) / ( ‖∇f_1(x)‖ ‖∇f_2(x)‖ ) ) ‖∇f_2(x)‖ = |x_1 − x_2| / √2.
Using x̄ = (1/2, 1/2)^T, we obtain:
ρ(x̄) = 0, ‖F(x̄)‖^{1/2} = ( 1^2 + (1/4)^2 )^{1/4} = (17/16)^{1/4}, and so μ(x̄) = (17/16)^{1/4}.
We are ready to apply the method described above.
In Step 1, we obtain ‖∇f_1(x̄)‖ > μ(x̄)^{1/2} because:
√2 > (17/16)^{1/8}.
Hence, the vector ∇f_1(x^*) ≠ 0 and i_1 = 1.
Then in Step 1 of the method with the vector ∇f_2(x), we also verify whether the following inequality holds:
‖∇f_2(x̄)‖ > μ(x̄)^{1/2}, or √( x̄_2^2 + x̄_1^2 ) > μ(x̄)^{1/2}.
Using the point x̄ = (1/2, 1/2)^T, we obtain:
1/√2 < (17/16)^{1/8}.
Therefore, we conclude that ∇f_2(x^*) = 0.
Thus, in this example, the mapping F̂(x) defined in (31) has the form F̂(x) = (f_1, f̂_2)^T, where f_1(x) = x_1 + x_2 and f̂_2(x) = x_1 x_2.
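With the functions from the sketch given after the method above, the computations of this example can be reproduced directly; note that the code compares against μ(x) as in the method statement, which gives the same conclusion here as the hand computation with μ(x̄)^{1/2}:

```python
import numpy as np

# Example 2 at x-bar = (1/2, 1/2): F(x) = (x1 + x2, x1*x2)^T.
x = np.array([0.5, 0.5])
Fx = np.array([x[0] + x[1], x[0] * x[1]])                 # (1, 1/4)
grads = [np.array([1.0, 1.0]), np.array([x[1], x[0]])]    # grad f_1(x), grad f_2(x)
print(independent_indices(Fx, grads))  # -> [0]: only grad f_1(x*) is nonzero
```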

5. Quadratic Programming Problems

In this section, we consider the quadratic programming (QP) problem (3):
minimize_x f(x) = (1/2) x^T Q x + c^T x subject to Ax ≤ b, (QP)
where Q is an n×n symmetric matrix, A is an m×n matrix, c, x ∈ R^n, and b ∈ R^m. The Lagrangian for problem (3) is defined by:
L(x, λ) = (1/2) x^T Q x + c^T x + Σ_{i=1}^m λ_i (a_i x − b_i), (32)
where λ = (λ_1, …, λ_m) is the vector of Lagrange multipliers and a_i is the ith row of the matrix A. The Karush-Kuhn-Tucker (KKT) conditions [17] are satisfied at x^* with some λ^* ∈ R^m if:
Qx^* + c + Σ_{i=1}^m λ_i^* a_i^T = 0, λ_i^* (a_i x^* − b_i) = 0, λ_i^* ≥ 0, a_i x^* ≤ b_i, for all i = 1, …, m. (33)
The point x^* at which relations (33) are satisfied is called a stationary point or a KKT point. Observe that (x^*, λ^*) is a solution of the following system:
Φ(x, λ) = ( Qx + c + A^T λ ; Λ(Ax − b) ) = 0, Λ = diag(λ_i)_{i=1,…,m}, λ_i^* ≥ 0, Ax^* ≤ b. (34)
We denote by I(x^*) the set of indices of the active constraints at x^*:
I(x^*) = { i = 1, …, m : a_i x^* = b_i }.
The following constraint qualification is used in the paper.
Definition 10 
(Linear independence constraint qualification). The linear independence constraint qualification (LICQ) holds at a feasible point x^* if the row-vectors a_j, j ∈ I(x^*), corresponding to the constraints active at x^*, are linearly independent.
The modified second-order sufficient conditions (MSOSC) state that there exist a Lagrange multiplier vector λ^* and a scalar ν > 0 such that:
ω^T ∇²_{xx}L(x^*, λ^*) ω ≥ ν ‖ω‖^2 (35)
for all ω satisfying:
a_i ω = 0, i ∈ I(x^*).
We divide the presentation in this section into three parts. We start by considering regular QP problems in Section 5.1. Then, in Section 5.2, we discuss the issue of identifying the active constraints and propose numerical strategies for determining the set I ( x * ) . We apply these techniques to degenerate QP problems in Section 5.3.

5.1. Regular Quadratic Programming

In this section, we consider the regular quadratic programming (QP) problem (3). In other words, we assume that the Linear Independence Constraint Qualification (LICQ) and the modified second-order sufficient conditions (MSOSC) (35) hold. Recall that A is an m×n matrix of coefficients representing the constraints Ax ≤ b in problem (3). Without loss of generality, assume that the first p constraints are active at x^*, so that:
I(x^*) = {1, …, p}.
Then we can rewrite the matrix A in the following form:
A = [ A_A ; A_N ], (36)
where A_A is the p×n matrix of coefficients corresponding to the active constraints at x^*, and A_N is the (m − p)×n matrix of coefficients corresponding to the nonactive constraints at x^*. It is important to note that we do not have prior knowledge of the set I(x^*). We will discuss possible numerical realizations to approximate the set of active constraints in Section 5.2. Additionally, we introduce the following notation associated with the active constraints at the point x^*:
b_A = (b_1, …, b_p)^T, λ_A = (λ_1, …, λ_p)^T.
Similarly,
b_N = (b_{p+1}, …, b_m)^T, λ_N = (λ_{p+1}, …, λ_m)^T.
In the following subsections, we will introduce three approaches to solving the QP problem (3) and provide formulas for the solution.

5.1.1. First Approach to Solving the QP Problem

In this subsection, we present an approach to solving the QP problem and obtaining a formula for its solution. This approach is based on the construction of the 2-factor-operator. For our consideration below, we need the following lemma.
Lemma 2. 
Let V be an n×n matrix, let G be a p×n matrix such that the columns of G^T are linearly independent, let L be an n×l matrix, let G_N = diag(g_i)_{i=1,…,l} be a diagonal full-rank matrix, and let:
(Vx, x) > 0 for all x ∈ Ker G \ {0}. (37)
Then the matrix Γ defined by:
Γ = [ V, G^T, L ; G, O_{p×p}, O_{p×l} ; O_{l×n}, O_{l×p}, G_N ] (38)
is nonsingular.
Proof. 
To prove the lemma, we must show that the matrix Γ defined by (38) has a zero nullspace. Consider the following system, which defines the nullspace of Γ in terms of a vector v = (x, y, z), where x ∈ R^n, y ∈ R^p, and z ∈ R^l:
Vx + G^T y + Lz = 0, Gx = 0, G_N z = 0. (39)
Since G_N is a full-rank diagonal matrix, the third equation in the system (39) implies that z = 0. Then, taking the scalar product of the first equation with x and using Gx = 0, we obtain:
0 = (Vx)·x + (G^T y)·x = (Vx)·x + y·(Gx) = (Vx)·x.
Consequently, x = 0; otherwise, we would have x ∈ Ker G \ {0} with (Vx)·x = 0, which contradicts assumption (37) of the lemma. Therefore, the first equation in (39) reduces to G^T y = 0, and since the columns of G^T are linearly independent, we obtain y = 0. Thus, the matrix Γ in (38) has a zero nullspace, (x, y, z) = (0, 0, 0), and therefore Γ is nonsingular. This concludes the proof of the lemma. □
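The conclusion of Lemma 2 is easy to check numerically on a random instance satisfying its hypotheses; a small NumPy sketch (all data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, l = 5, 2, 3
V = np.eye(n)                        # (Vx, x) > 0 for all x != 0, in particular on Ker G
G = rng.standard_normal((p, n))      # columns of G^T linearly independent a.s.
L = rng.standard_normal((n, l))
G_N = np.diag([1.0, -2.0, 0.5])      # full-rank diagonal

Gamma = np.block([
    [V,                G.T,              L               ],
    [G,                np.zeros((p, p)), np.zeros((p, l))],
    [np.zeros((l, n)), np.zeros((l, p)), G_N             ],
])
print(np.linalg.matrix_rank(Gamma) == n + p + l)   # -> True: Gamma is nonsingular
```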
Let x ∈ R^n, λ ∈ R^m, and let the mapping Φ be defined in (34), so that Φ(x, λ) = ( Qx + c + A^T λ ; Λ(Ax − b) ). Introduce the projection matrices P_1 and P_2 as:
P_1 = [ I_{n×n}, O_{n×m} ; O_{m×n}, O_{m×m} ], P_2 = [ O_{n×n}, O_{n×m} ; O_{m×n}, I_{m×m} ].
Recall that the matrix A_A is defined in (36), and introduce the vector h̄ ∈ R^{n+m} such that:
h̄ = (h_1, h_2)^T, h_1 ∈ R^n, h_2 ∈ R^m,
where A_A h_1 = 0, h_1 ≠ 0, and h_2 = (1, …, 1, 0, …, 0)^T with p ones followed by m − p zeros.
Define the mapping Ψ as:
Ψ(x, λ) = P_1 Φ(x, λ) + P_2 Φ'(x, λ) h̄. (40)
Recall that a_i is the ith row of the matrix A and b = (b_1, …, b_m)^T. Then:
Φ'(x, λ) = [ Q, A^T ; ΛA, S ], S = diag(a_i x − b_i)_{i=1,…,m},
and the mapping Ψ defined in (40) can be rewritten as:
Ψ(x, λ) = ( Qx + c + A^T λ ; ΛA h_1 + S h_2 ).
Introduce the matrix Λ_N = diag(λ_i)_{i=p+1,…,m}. Then, taking into account the definition of h_1 and h_2, we obtain:
Ψ(x, λ) = ( Qx + c + A^T λ ; A_A x − b_A ; Λ_N A_N h_1 ).
Observe that if (x^*, λ^*) is a solution of (34), it is also a solution of Ψ(x, λ) = 0, or, equivalently,
Ψ(x, λ) = ( Qx + c + A^T λ ; A_A x − b_A ; Λ_N A_N h_1 ) = 0. (41)
To obtain the formula for the solution (x^*, λ^*), we rewrite the system (41) as:
[ Q, A_A^T, A_N^T ; A_A, O, O ; O_{(m−p)×n}, O, K ] ( x ; λ_A ; λ_N ) = ( −c ; b_A ; O_{m−p} ), K = diag(a_i h_1)_{i=p+1,…,m}.
Assuming that LICQ and MSOSC hold and applying Lemma 2, we obtain that the matrix:
[ Q, A_A^T, A_N^T ; A_A, O, O ; O_{(m−p)×n}, O, K ]
is invertible, which yields the formula for (x^*, λ^*):
( x^* ; λ_A^* ; λ_N^* ) = [ Q, A_A^T, A_N^T ; A_A, O, O ; O_{(m−p)×n}, O, K ]^{−1} ( −c ; b_A ; O_{m−p} ). (42)

5.1.2. Second Approach to Solving the QP Problem

Assume that we can estimate the set I(x^*), which in our notation is I(x^*) = {1, 2, …, p}. Taking into account that λ_{p+1}^* = 0, …, λ_m^* = 0 and that A_A x^* = b_A, system (34) can be reduced to the following one:
Φ(x, λ) = ( Qx + c + A_A^T λ_A ; A_A x − b_A ) = 0, (43)
which can be written as:
[ Q, A_A^T ; A_A, O_{p×p} ] ( x ; λ_A ) = ( −c ; b_A ). (44)
Under the assumptions of LICQ and MSOSC, the matrix
[ Q, A_A^T ; A_A, O_{p×p} ]
is invertible, and system (44) yields the formula for the solution (x^*, λ_A^*):
( x^* ; λ_A^* ) = [ Q, A_A^T ; A_A, O_{p×p} ]^{−1} ( −c ; b_A ). (45)
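Formulas (42) and (45) are plain linear solves once the set I(x^*) is known. The following is a minimal NumPy sketch of (45) (the function name qp_solution_45 is ours; the analogous solve for (42) simply appends the A_N^T and K blocks):

```python
import numpy as np

def qp_solution_45(Q, c, A_act, b_act):
    """Formula (45): solve [Q, A_A^T; A_A, 0] (x; lam_A) = (-c; b_A)."""
    n, p = Q.shape[0], A_act.shape[0]
    KKT = np.block([[Q, A_act.T], [A_act, np.zeros((p, p))]])
    sol = np.linalg.solve(KKT, np.concatenate([-c, b_act]))
    return sol[:n], sol[n:]   # x*, lambda_A*
```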
Remark 3. 
System (41) reduces to system (43) by removing the equations Λ_N A_N h_1 = 0 corresponding to the nonactive constraints. Similarly, Equation (42) reduces to (45).
Remark 4. 
We note that solutions of QP problems have the following specific property: if x^* is a solution of the QP problem and h^T ∇²_{xx}L(x, λ) h = 0 for the vector h ∈ Ker A, then the points x = x^* + th are also solutions of the QP problem.

5.1.3. Examples

In this section, we illustrate the two described approaches with examples. Namely, we consider the construction of system (41) required for the first approach. Then we illustrate using the exact formula (45) derived in the second approach.
Example 3. 
Consider the problem:
minimize_x (1/2) x_2^2 − x_1 subject to x_1 ≤ 0, x_2 ≤ 1. (46)
The matrix A in this example is A = [1, 0; 0, 1] and b = (0, 1)^T. The solution to this problem is the point (x_1^*, x_2^*) = (0, 0) with λ_1^* = 1 and λ_2^* = 0. Hence, I(x^*) = {1}, A_A = (1, 0), and b_A = 0. Moreover,
Q = [0, 0; 0, 1] and c = (−1, 0)^T.
By choosing h_1 = (0, 1)^T and h_2 = (1, 0)^T, the system (41) reduces to the linear system:
Ψ(x, λ) = ( Qx + c + A^T λ ; A_A x − b_A ; Λ_N A_N h_1 ) = ( λ_1 − 1 ; x_2 + λ_2 ; x_1 ; λ_2 ).
Solving the system Ψ(x, λ) = 0 yields (x_1^*, x_2^*, λ_1^*, λ_2^*) = (0, 0, 1, 0), as claimed.
Now, let us illustrate the second approach. Specifically, using the formula (45) for the solution of problem (46) with λ_A = λ_1, we obtain:
( x_1^* ; x_2^* ; λ_1^* ) = [0, 0, 1; 0, 1, 0; 1, 0, 0]^{−1} (1, 0, 0)^T = (0, 0, 1)^T,
as claimed.
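With the sketch of formula (45) given in Section 5.1.2, the second computation above is reproduced as follows:

```python
import numpy as np

# Data of Example 3: Q = [0, 0; 0, 1], c = (-1, 0)^T, A_A = (1, 0), b_A = 0.
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
c = np.array([-1.0, 0.0])
A_act = np.array([[1.0, 0.0]])
b_act = np.array([0.0])
x_star, lam_A = qp_solution_45(Q, c, A_act, b_act)
print(x_star, lam_A)   # -> [0. 0.] [1.], matching (x*, lambda_1*) above
```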
Example 4. 
Consider the problem:
minimize_x x_1^2 + (1/2) x_2^2 + x_1 x_2 subject to x_1 ≤ 1, x_2 ≤ 1. (47)
The solution to this problem is the point (x_1^*, x_2^*) = (0, 0) with λ_1^* = 0 and λ_2^* = 0. Hence, I(x^*) = ∅. Moreover,
Q = [2, 1; 1, 1] and c = (0, 0)^T.
By choosing h_1 = (1, 1)^T and h_2 = (0, 0)^T, the system (41) reduces to the following linear system for problem (47):
Ψ(x, λ) = ( 2x_1 + x_2 + λ_1 ; x_1 + x_2 + λ_2 ; λ_1 ; λ_2 ).
Solving Ψ(x, λ) = 0 yields (x_1^*, x_2^*, λ_1^*, λ_2^*) = (0, 0, 0, 0), as claimed.
To illustrate the second approach, we rewrite the exact formula (45) for the solution of problem (47) in the form:
( x_1^* ; x_2^* ) = [2, 1; 1, 1]^{−1} (0, 0)^T = (0, 0)^T,
as claimed.

5.1.4. Third Approach to Solving the QP Problem

In this subsection, we present another approach to solving the QP problem. A formula that we obtain for the solution of the QP problem is also based on the construction of the 2-factor-operator.
First, we replace the inequality constraints in the QP problem with equality constraints of the form:
Ax − b + y^2 = 0, (48)
where y^2 = (y_1^2, …, y_m^2)^T. We then define the Lagrangian as follows:
L̃(x, y, λ) = (1/2) x^T Q x + c^T x + Σ_{i=1}^m λ_i (a_i x − b_i + y_i^2). (49)
Introduce the notation:
Λ = diag(λ_j)_{j=1,…,m}, Y = diag(y_j)_{j=1,…,m}, e = (1, 1, …, 1)^T.
Then the point (x^*, y^*, λ^*) is a solution of the following system:
Φ̃(x, y, λ) = ( Qx + c + A^T λ ; 2ΛYe ; Ax − b + y^2 ) = 0. (50)
The Jacobian matrix of the system (50) is given by:
Φ̃'(x, y, λ) = [ Q, O_{n×m}, A^T ; O_{m×n}, 2Λ, 2Y ; A, 2Y, O ].
Then with h = (h_1, h_2, h_3)^T, h_1 ∈ R^n, h_2, h_3 ∈ R^m, we obtain:
Φ̃'(x, y, λ) h = ( Qh_1 + A^T h_3 ; 2Λh_2 + 2Yh_3 ; Ah_1 + 2Yh_2 ).
Assuming that LICQ and MSOSC hold, the matrix Φ̃'(x^*, y^*, λ^*) is singular if and only if the strict complementarity condition does not hold, in other words, if the set of indices of the weakly active constraints,
I_0(x^*) = { j = 1, …, m : λ_j^* = 0, y_j^* = 0 },
is not empty.
Let P_1 be the matrix of the orthoprojector onto Im Φ̃'(x^*, y^*, λ^*), and let P_2 be the matrix of the orthoprojector onto ( Im Φ̃'(x^*, y^*, λ^*) )^⊥. Note that P_1 is a projector onto the linear part of the mapping Φ̃, while P_2 is a projector onto the quadratic part of Φ̃.
Introduce a vector ĥ = (ĥ_1, ĥ_2, ĥ_3) such that P_2 Φ̃'(x^*, y^*, λ^*) ĥ = 0. Then,
2Λĥ_2 + 2Yĥ_3 = 0, Aĥ_1 + 2Yĥ_2 = 0,
or:
(ĥ_2)_i = k, k ∈ R \ {0}, when y_i^* = 0 and λ_i^* = 0; 0, when y_i^* = 0, λ_i^* ≠ 0; w, w ∈ R \ {0}, when y_i^* ≠ 0, λ_i^* = 0,
(ĥ_3)_i = t, t ∈ R \ {0}, when y_i^* = 0 and λ_i^* = 0; 0, when y_i^* ≠ 0, λ_i^* = 0; r, r ∈ R \ {0}, when y_i^* = 0, λ_i^* ≠ 0,
and ĥ_1 is defined by:
Aĥ_1 + 2Yĥ_2 = 0. (51)
Observe that P_2 Φ̃'(x^*, y^*, λ^*) ĥ = 0, i.e., ĥ ∈ Ker P_2 Φ̃'(x^*, y^*, λ^*).
Define H as the diagonal matrix whose entries are the components of the vector ĥ_2, and K as the diagonal matrix whose entries are the components of the vector ĥ_3, so that:
H = diag((ĥ_2)_i), K = diag((ĥ_3)_i), i = 1, …, m.
Then:
Φ̃''(x, y, λ) ĥ = [ O, O_{n×m}, O ; O_{m×n}, 2K, 2H ; O_{m×n}, 2H, O ].
The 2-factor-operator for the mapping Φ̃ is defined as:
Ψ̃(x, y, λ) = P_1 Φ̃(x, y, λ) + P_2 Φ̃'(x, y, λ) ĥ,
or:
Ψ̃(x, y, λ) = ( Qx + c + A^T λ ; 2Λĥ_2 + 2Yĥ_3 ; Aĥ_1 + 2Yĥ_2 ).
We choose a vector ĥ_1 according to (51) so that the matrix:
Ψ̃'(x^*, y^*, λ^*) = [ Q, O_{n×m}, A^T ; O_{m×n}, 2K, 2H ; O_{m×n}, 2H, O_{m×m} ]
is nonsingular. Then (x^*, y^*, λ^*) can be determined using the following formula:
( x^* ; y^* ; λ^* ) = [ Q, O_{n×m}, A^T ; O_{m×n}, 2K, 2H ; O_{m×n}, 2H, O_{m×m} ]^{−1} ( −c ; O_m ; −Aĥ_1 ).

5.2. Identification of the Active Constraints

In this section, we address the issue of identifying the active constraints and propose strategies for numerically identifying the set of active constraints I ( x * ) .
We begin by considering the mapping h(z) : R^n → R^n, where h ∈ C^2(R^n). We can also represent h as an n-vector of functions h_1, …, h_n, such that h(z) = (h_1(z), …, h_n(z))^T.
Theorem 8. 
Let h ∈ C^2(R^n, R^n) be 2-regular at the point z^*, and let N_ε(z^*) be a sufficiently small neighborhood of z^* in R^n. Assume that there exists a function η(z) : N_ε(z^*) → R such that η(z^*) = 0 and for all z ∈ N_ε(z^*), we have:
c_1 ‖z − z^*‖^2 ≤ η(z) ≤ c_2 ‖z − z^*‖, (52)
where c_1, c_2 > 0 are independent constants.
Then there exists a sufficiently small δ such that 0 < δ < ε, and for any 1 ≤ i ≤ n and any point z ∈ N_δ(z^*), one of the following holds:
  • Either |h_i(z)| > η(z)^{1/3}, which implies that h_i(z^*) ≠ 0,
  • Or |h_i(z)| ≤ η(z)^{1/3}, which implies that h_i(z^*) = 0.
Proof. 
The proof is similar to the one in [5]. □
Let:
ϱ(x) = min_{i=1,…,m} d( ∇g_i(x), span(∇g_1(x), …, ∇g_{i−1}(x), ∇g_{i+1}(x), …, ∇g_m(x)) ),
where d(a, S) denotes the distance between a vector a and a set S. It turns out that if we take η(x) = max{ ‖g(x)‖^{1/2}, ϱ(x) } and g is 2-regular at x^*, then inequality (52) holds with z = x.
Theorem 8 can be used for the numerical determination of the set of active constraints I ( x * ) in the QP problem. To apply Theorem 8, we need to define a function η ( · ) that satisfies the conditions of the theorem. Recall that for QP problem (3), we denote the Lagrange function defined in (32) by L ( x , λ ) .
Under the assumptions of LICQ and MSOSC, the following holds for x ∈ N_ε(x^*) and λ ∈ N_ε(λ^*):
c_1 ‖x − x^*‖ ≤ ‖L'_x(x, λ)‖ + Σ_{i=1}^m | min{ λ_i, b_i − a_i x } | ≤ c_2 ( ‖x − x^*‖ + ‖λ − λ^*‖ ),
where ε > 0 is sufficiently small (see, for example, [18]). Hence, the required function η(x, λ) can be defined by:
η(x, λ) = ‖L'_x(x, λ)‖ + Σ_{i=1}^m | min{ λ_i, b_i − a_i x } |.
Then, according to Theorem 8, for every i = 1, …, m, if:
| b_i − a_i x | ≤ η(x, λ)^{1/2},
then it follows that i ∈ I(x^*).
Moreover, if we introduce the function:
η̃(x, λ) = ‖(L_A)'_x(x, λ)‖ + Σ_{i=1}^m | a_i x − b_i |,
where L_A(x, λ) = (1/2) x^T Q x + c^T x + Σ_{i=1}^m λ_i (a_i x − b_i), then η̃(x, λ) satisfies the estimate:
c_1 ‖(x, λ) − (x^*, λ^*)‖ ≤ η̃(x, λ) ≤ c_2 ‖(x, λ) − (x^*, λ^*)‖
for (x, λ) ∈ N_ε(x^*, λ^*), where ε > 0 is a sufficiently small number.
Then, for any i ∈ I(x^*), if:
| λ_i | ≤ η̃(x, λ)^{1/2},
then i ∈ I_0(x^*) = I(x^*) \ I_+(x^*). Here, I_0(x^*) represents the set of constraints that are weakly active, i.e., those for which the associated multipliers are equal to zero, while I_+(x^*) denotes the set of constraints that are strongly active at the point x^*, i.e., those whose associated Lagrange multipliers are positive.
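A small NumPy sketch of this identification test follows (the names eta and estimate_active_set are ours; it implements the threshold |b_i − a_i x| ≤ η(x, λ)^{1/2} with η from the display above), illustrated on the data of Example 3 near its solution:

```python
import numpy as np

def eta(x, lam, Q, c, A, b):
    """eta(x, lambda) = ||L'_x(x, lambda)|| + sum_i |min(lambda_i, b_i - a_i x)|."""
    Lx = Q @ x + c + A.T @ lam
    slack = b - A @ x
    return np.linalg.norm(Lx) + np.sum(np.abs(np.minimum(lam, slack)))

def estimate_active_set(x, lam, Q, c, A, b):
    """Declare constraint i active when |b_i - a_i x| <= eta(x, lam)^(1/2)."""
    tol = eta(x, lam, Q, c, A, b) ** 0.5
    return [i for i in range(A.shape[0]) if abs(b[i] - A[i] @ x) <= tol]

# Example 3 data near the solution (0, 0) with lambda* = (1, 0):
Q = np.array([[0.0, 0.0], [0.0, 1.0]]); c = np.array([-1.0, 0.0])
A = np.eye(2); b = np.array([0.0, 1.0])
print(estimate_active_set(np.array([0.01, -0.01]), np.array([0.95, 0.02]),
                          Q, c, A, b))   # -> [0]: the first constraint is active
```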

5.3. General Case

Consider the Lagrange function in the form:
L(x, λ_0, λ) = λ_0 ( (1/2) x^T Q x + c^T x ) + Σ_{i=1}^m λ_i (a_i x − b_i).
In this case, if x^* is a solution of problem (3), then there exist multipliers λ_0^* and λ^*, not all zero, such that λ_i^* ≥ 0, Ax^* ≤ b, and the point (x^*, λ_0^*, λ^*) is a solution of the following system:
Φ_0(x, λ_0, λ) = ( λ_0 (Qx + c) + A^T λ ; Λ(Ax − b) ; λ_0 + Σ_{i=1}^m λ_i − 1 ) = 0, Λ = diag(λ_i)_{i=1,…,m}. (53)
Introduce the notation:
ξ(x, λ_0, λ) = ‖ λ_0 (Qx + c) + Σ_{i=1}^m λ_i a_i^T ‖ + Σ_{i=1}^m | λ_i (a_i x − b_i) | + | λ_0 + Σ_{i=1}^m λ_i − 1 |. (54)
We are making the following assumption for the rest of the section.
Assumption A1. 
Assume that there exist C_1 > 0 and a sufficiently small ε > 0 such that for any (x, λ_0, λ) ∈ N_ε(x^*, λ_0^*, λ^*), the following holds:
ξ(x, λ_0, λ) ≥ C_1 ‖x − x^*‖^2.
Remark 5. 
It is easy to see that for any (x, λ_0, λ) ∈ N_ε(x^*, λ_0^*, λ^*),
ξ(x, λ_0, λ) ≤ C_2 ‖(x, λ_0, λ) − (x^*, λ_0^*, λ^*)‖,
where C_2 is an independent constant.
As follows from Assumption 1 and Theorem 8, for those indices i = 1, …, m that satisfy the inequality:
| a_i x − b_i | < ξ(x, λ_0, λ)^{1/4},
we conclude that i ∈ I(x^*).
We illustrate Assumption 1 with the following examples, in which it holds.
Example 5. 
This example illustrates a choice of the function ξ in a more general setting.
Consider the mapping F defined by either:
F(x, λ) = ( x^2 − λ^2 ; xλ )
or:
F(x, λ) = ( xλ ; x − λ ).
In each of the two cases, (x^*, λ^*) = (0, 0).
Introduce the function ξ defined as:
ξ(x, λ) = ‖F(x, λ)‖.
It follows that for any (x, λ) ∈ N_ε(x^*, λ^*), the inequality:
ξ(x, λ) ≥ C ‖x − 0‖^2
holds, where C is an independent constant.
Example 6. 
Consider the problem:
minimize x x 2 2 x 1 subject to 2 x 1 0 , x 1 0 . .
The solution to this problem is the point ( x 1 * , x 2 * ) = ( 0 , 0 ) , so I ( x * ) = { 1 , 2 } . Moreover, the system (53) in this example is given by:
Φ 0 ( x , λ 0 , λ ) = λ 0 + λ 1 + λ 2 2 λ 0 x 2 2 λ 1 x 1 λ 2 x 1 λ 0 + λ 1 + λ 2 1 = 0 .
We also introduce the function ξ ( x , λ 0 , λ ) , which can be defined using Equation (54), but in this case, we define it as ξ ( x , λ 0 , λ ) = | Φ 0 ( x , λ 0 , λ ) | .
Under Assumption 1, we use the function ξ to determine the set I ( x * ) . We also take into account the fact that the constraints in the problem are linear and the rank of the matrix
2 0 1 0
is 1. This implies that the constraints are linearly dependent. Consequently, we can eliminate, for instance, the second constraint from problem (55) and simplify system (56) to the following one:
$$\Phi_0(x,\lambda_0,\lambda) = \begin{pmatrix} -\lambda_0 + 2\lambda_1 \\ \lambda_0 x_2 \\ 2\lambda_1 x_1 \\ \lambda_0 + \lambda_1 - 1 \end{pmatrix} = 0.$$
Now, by introducing:
$$P_1 = \begin{pmatrix} 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1 \end{pmatrix}, \qquad P_2 = \begin{pmatrix} 0&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&0 \end{pmatrix}, \qquad \text{and} \qquad h = \begin{pmatrix} 0\\0\\1\\1 \end{pmatrix},$$
we construct the modified 2-factor-system:
$$\tilde{\Phi}_0(x,\lambda_0,\lambda) = P_1\Phi_0(x,\lambda_0,\lambda) + P_2\Phi_0'(x,\lambda_0,\lambda)h = \begin{pmatrix} -\lambda_0 + 2\lambda_1 \\ x_2 \\ 2x_1 \\ \lambda_0 + \lambda_1 - 1 \end{pmatrix} = 0.$$
This system implies that the solution is $x^* = 0$.
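The construction can be checked numerically. The sketch below is our illustration (with the Jacobian of $\Phi_0$ written out by hand for this small case, and variable order $z = (x_1, x_2, \lambda_0, \lambda_1)$); it assembles $\tilde{\Phi}_0 = P_1\Phi_0 + P_2\Phi_0'h$ and evaluates it at a root.

```python
# Sketch of the modified 2-factor system for Example 5 after eliminating
# the dependent constraint; illustrative, not the authors' code.
import numpy as np

def phi0(z):
    x1, x2, l0, l1 = z
    return np.array([-l0 + 2*l1, l0*x2, 2*l1*x1, l0 + l1 - 1.0])

def jac_phi0(z):
    x1, x2, l0, l1 = z
    return np.array([[0, 0, -1, 2],
                     [0, l0, x2, 0],
                     [2*l1, 0, 0, 2*x1],
                     [0, 0, 1, 1]], dtype=float)

P1 = np.diag([1.0, 0.0, 0.0, 1.0])
P2 = np.diag([0.0, 1.0, 1.0, 0.0])
h = np.array([0.0, 0.0, 1.0, 1.0])

def phi_tilde(z):
    return P1 @ phi0(z) + P2 @ (jac_phi0(z) @ h)

# Components 2 and 3 of phi_tilde reduce to x2 = 0 and 2*x1 = 0, which
# forces the primal solution x* = 0, in line with the text.
print(phi_tilde(np.array([0.0, 0.0, 2/3, 1/3])))   # ~[0. 0. 0. 0.]
```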
Now we will demonstrate the application of the approach described in Section 5.1.2 to problem (55). By removing the first constraint, we obtain a regular QP problem with $A_A = (1, 0)$. Additionally, in this example,
$$Q = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad c = \begin{pmatrix} -1 \\ 0 \end{pmatrix}.$$
Then, applying Equation (45) derived in Section 5.1.2 yields:
$$\begin{pmatrix} x_1^* \\ x_2^* \\ \lambda^* \end{pmatrix} = \begin{pmatrix} 0&0&1\\ 0&1&0\\ 1&0&0 \end{pmatrix}^{-1}\begin{pmatrix} 1\\0\\0 \end{pmatrix} = \begin{pmatrix} 0\\0\\1 \end{pmatrix},$$
as claimed.
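The same result can be reproduced with a generic linear solve, assuming the reduced system has the standard KKT form $\begin{pmatrix} Q & A_A^T \\ A_A & 0\end{pmatrix}\begin{pmatrix}x\\ \lambda\end{pmatrix} = \begin{pmatrix}-c\\ b_A\end{pmatrix}$; the snippet below is a sketch of that check with the data of this example.

```python
# Verifying the closed-form solve of the reduced KKT system for Example 5.
import numpy as np

Q = np.array([[0.0, 0.0], [0.0, 1.0]])
c = np.array([-1.0, 0.0])
A_A = np.array([[1.0, 0.0]])            # the remaining active constraint
b_A = np.array([0.0])

K = np.block([[Q, A_A.T], [A_A, np.zeros((1, 1))]])
rhs = np.concatenate([-c, b_A])
print(np.linalg.solve(K, rhs))          # [0. 0. 1.]  ->  x* = (0, 0), lam* = 1
```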
There are various directions in which the approach proposed in this paper can be extended. The next example illustrates a degenerate QP problem in which MSOSC does not hold at the solution. However, the approach proposed in this paper can still be applied to find a solution to this problem. It is worth noting that the solution set in this case is locally not unique.
Example 6. 
Consider the problem:
$$\underset{x}{\text{minimize}} \;\; x_1x_2 - x_1 \qquad \text{subject to} \;\; x_1 \le 0.$$
The solution to this problem is the set of points $X^* = \{(0, x_2^*) \mid x_2^* \in \mathbb{R}\}$. We observe that the constraint is satisfied as an equality at any $x^* \in X^*$. Additionally, the system (53) for this example consists of one linear equation and three quadratic equations:
$$\Phi_0(x,\lambda_0,\lambda) = \begin{pmatrix} \lambda_0 x_2 - \lambda_0 + \lambda_1 \\ \lambda_0 x_1 \\ \lambda_1 x_1 \\ \lambda_0 + \lambda_1 - 1 \end{pmatrix} = 0.$$
Denote the projection of the point $x$ onto the set $X^*$ by $P_{X^*}(x)$, and define $\xi(x,\lambda_0,\lambda) = \|\Phi_0(x,\lambda_0,\lambda)\|$. For any point $(x,\lambda_0,\lambda) \in N_\varepsilon(x^*,\lambda_0^*,\lambda^*)$, we have the inequality:
$$\xi(x,\lambda_0,\lambda) \ge \alpha\|x - P_{X^*}(x)\|,$$
where $\alpha > 0$ and $\varepsilon > 0$ is sufficiently small.
Consider, for example, the point $x^* = (0, 1)^T$.
In problem (57), we replace the inequality $x_1 \le 0$ with the equation $x_1 + y^2 = 0$, where $y \in \mathbb{R}$. We then introduce the Lagrange function in the form of (49) as follows:
$$L(x,y,\lambda) = (x_1x_2 - x_1) + \lambda(x_1 + y^2).$$
If $\lambda^*$ is a Lagrange multiplier corresponding to the solution $(x_1^*, x_2^*, y^*) = (0, x_2^*, 0)$, then the point $(x_1^*, x_2^*, y^*, \lambda^*)$ is a solution of the following system:
$$\Phi(x_1,x_2,y,\lambda) = \begin{pmatrix} x_2 - 1 + \lambda \\ x_1 \\ 2\lambda y \\ x_1 + y^2 \end{pmatrix} = 0.$$
The Jacobian matrix of this system is given by:
$$\Phi'(x_1,x_2,y,\lambda) = \begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 2\lambda & 2y \\ 1 & 0 & 2y & 0 \end{pmatrix}.$$
This Jacobian matrix becomes singular at any point of the form $(x_1, x_2, y, \lambda) = (0, x_2, 0, \lambda)$. To overcome this singularity, we can apply the approach described in the paper. Specifically, we notice that $\Phi'(x_1^*, x_2^*, y^*, \lambda^*)h = 0$ for $h = (0, -1, 1, 1)^T$. Moreover, the point $(x_1^*, x_2^*, y^*, \lambda^*) = (0, 1, 0, 0)$ is one of the solutions of the system defined in (58), corresponding to a solution of the QP problem (57).
Additionally,
$$\Phi'(x_1,x_2,y,\lambda)\begin{pmatrix}0\\-1\\1\\1\end{pmatrix} = \begin{pmatrix}0\\0\\2\lambda + 2y\\2y\end{pmatrix}, \qquad \Phi''(x_1,x_2,y,\lambda)\begin{pmatrix}0\\-1\\1\\1\end{pmatrix} = \begin{pmatrix}0&0&0&0\\0&0&0&0\\0&0&2&2\\0&0&2&0\end{pmatrix}.$$
The 2-factor operator of $\Phi$ at $(x_1^*, x_2^*, y^*, \lambda^*)$ with respect to the vector $h = (0, -1, 1, 1)^T$, which is defined similarly to the operator in Equation (11), is given by:
$$\Phi'(x_1^*,x_2^*,y^*,\lambda^*) + \Phi''(x_1^*,x_2^*,y^*,\lambda^*)h = \begin{pmatrix}0&1&0&1\\1&0&0&0\\0&0&2&2\\1&0&2&0\end{pmatrix}.$$
Note that the 2-factor operator is nonsingular and the system:
$$\Phi(x_1,x_2,y,\lambda) + \Phi'(x_1,x_2,y,\lambda)h = \begin{pmatrix} x_2 - 1 + \lambda \\ x_1 \\ 2\lambda y + 2\lambda + 2y \\ x_1 + y^2 + 2y \end{pmatrix} = 0$$
has the point $(x_1^*, x_2^*, y^*, \lambda^*) = (0, 1, 0, 0)$ as its regular solution.
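Since the 2-factor operator is nonsingular, plain Newton iteration applies to the modified system; the sketch below (ours, with the Jacobian and the constant second-derivative term written out by hand) verifies this from a starting point near $z^* = (0, 1, 0, 0)$.

```python
# Sketch for Example 6: Newton's method on Phi(z) + Phi'(z) h = 0 with
# h = (0, -1, 1, 1); the 2-factor operator is its (nonsingular) Jacobian.
import numpy as np

h = np.array([0.0, -1.0, 1.0, 1.0])

def phi(z):
    x1, x2, y, lam = z
    return np.array([x2 - 1 + lam, x1, 2*lam*y, x1 + y**2])

def jac(z):
    x1, x2, y, lam = z
    return np.array([[0, 1, 0, 1],
                     [1, 0, 0, 0],
                     [0, 0, 2*lam, 2*y],
                     [1, 0, 2*y, 0]], dtype=float)

def hess_h(z):
    # derivative of z -> jac(z) @ h; constant here since phi is quadratic
    return np.array([[0, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 2*h[3], 2*h[2]],
                     [0, 0, 2*h[2], 0]], dtype=float)

def newton_2factor(z, iters=20):
    for _ in range(iters):
        G = phi(z) + jac(z) @ h          # modified (2-factor) system
        J = jac(z) + hess_h(z)           # 2-factor operator at z
        z = z - np.linalg.solve(J, G)
    return z

print(newton_2factor(np.array([0.1, 1.1, 0.05, -0.1])))  # -> ~[0, 1, 0, 0]
```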

6. Conclusions

The paper focused on applying the p-regularity theory to nonlinear equations with quadratic mappings and quadratic programming (QP) problems. The first part of the paper used the special structure of the nonlinear equation and the construction of a 2-factor operator to derive a formula for the solution of the equation. In the second part, the QP problem was reduced to a system of linear equations using a 2-factor-operator. The solution of the system is a local minimizer of the QP problem with a corresponding Lagrange multiplier. The formula for the solution of the linear system was given. The paper also described a procedure for identifying the active constraints, which was used in constructing the linear system.
The paper primarily focuses on the case where the matrix $F'(x^*)$ is degenerate at the solution of the nonlinear equation $F(x) = 0$. In general, however, the matrix $F'(x^*)$ need not be degenerate. While we do not explicitly address the identification of degeneracy at a solution point, it is possible to determine whether the matrix $F'(x^*)$ is degenerate by examining the behavior of the mapping $F$ in a small neighborhood of the solution $x^*$. Specifically, a function $\nu_p(x)$ can be defined such that:
$$c_1\|x - x^*\|^p \le \nu_p(x) \le c_2\|x - x^*\|$$
for some natural number $p$ and constants $c_1, c_2 > 0$. Based on the conclusion about the degeneracy of the matrix $F'(x^*)$, an appropriate method can be chosen to solve the system of equations $F(x) = 0$, as stated in the following theorem.
Theorem 9. 
Let $F \in C^p(\mathbb{R}^n, \mathbb{R}^n)$, with components $F = (f_1, \dots, f_n)^T$, be such that $F(x^*) = 0$, and let $x \in N_\varepsilon(x^*)$, where $\varepsilon > 0$ is sufficiently small. Then we have the following two cases:
  • In the first case, for all $i \in \{1, 2, \dots, n\}$, we have:
    $$d\Bigl(f_i'(x),\; \operatorname{span}\bigl\{f_1'(x),\dots,f_{i-1}'(x),f_{i+1}'(x),\dots,f_n'(x)\bigr\}\Bigr) > \nu_p(x)^{1/(p+1)}.$$
    In this case, $\det F'(x^*) \ne 0$, indicating that $F$ is not degenerate at $x^*$.
  • In the second case, there exists an index $i \in \{1, 2, \dots, n\}$ such that:
    $$d\Bigl(f_i'(x),\; \operatorname{span}\bigl\{f_1'(x),\dots,f_{i-1}'(x),f_{i+1}'(x),\dots,f_n'(x)\bigr\}\Bigr) < \nu_p(x)^{1/(p+1)}.$$
    In this case, $\det F'(x^*) = 0$, indicating that $F$ is degenerate at $x^*$.
Certainly, the construction of the function ν p ( x ) is an important consideration. One approach to constructing such a function is provided in the following lemma, specifically for the case of p = 2 .
Lemma 3. 
Let $F \in C^2(\mathbb{R}^n, \mathbb{R}^n)$ and assume that either $(F'(x^*))^{-1}$ exists, or for any $h \in \operatorname{Ker} F'(x^*)$ with $\|h\| = 1$, the inverse $(F''(x^*)h)^{-1}$ exists. Then, there exists a sufficiently small $\varepsilon > 0$ such that the following inequality holds for all $x \in N_\varepsilon(x^*)$:
$$\|F(x)\| \ge C\|x - x^*\|^2,$$
where $C$ is a positive constant.
Based on this lemma, one can choose the function $\nu_2(x) = \|F(x)\|$.
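A sketch of the resulting degeneracy test (our illustration of Theorem 9 with $p = 2$ and $\nu_2(x) = \|F(x)\|$; the helper names are hypothetical) computes each distance $d\bigl(f_i'(x), \operatorname{span}\{f_j'(x),\, j \ne i\}\bigr)$ by a least-squares projection and compares it with $\nu_2(x)^{1/3}$.

```python
# Sketch of the Theorem 9 degeneracy test for p = 2, nu_2(x) = ||F(x)||.
import numpy as np

def degeneracy_test(F, jac, x):
    """Return True if F'(x*) is judged degenerate from the sample point x."""
    J = jac(x)                               # rows are the gradients f_i'(x)
    threshold = np.linalg.norm(F(x)) ** (1.0 / 3.0)   # nu_2(x)^(1/(p+1))
    for i in range(J.shape[0]):
        others = np.delete(J, i, axis=0)
        # distance from f_i'(x) to span{f_j'(x), j != i}
        coef, *_ = np.linalg.lstsq(others.T, J[i], rcond=None)
        dist = np.linalg.norm(J[i] - others.T @ coef)
        if dist < threshold:
            return True                      # some gradient nearly dependent
    return False

# Example: F(x) = (x1^2, x2) is degenerate at x* = 0.
F1 = lambda x: np.array([x[0]**2, x[1]])
J1 = lambda x: np.array([[2*x[0], 0.0], [0.0, 1.0]])
print(degeneracy_test(F1, J1, np.array([0.01, 0.01])))   # True
```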
It is worth noting that the proposed approach also covers the case where the system of equations consists of both linear and quadratic equations. Moreover, the approach can be extended to solve multilinear equations with polynomials of degree p, given by the equation:
$$F(x) = Q_p[x]^p + Q_{p-1}[x]^{p-1} + \dots + Q_1[x] + Q_0 = 0,$$
where $Q_k[x]^k$ is a $k$-multilinear mapping for $k = 1,\dots,p$. Additionally, polynomial programming problems can be formulated as follows:
$$\text{minimize } f(x) \quad \text{subject to} \quad f_i(x) \le 0, \quad i = 1,\dots,m,$$
where the $f_i(x)$ are polynomial mappings.
There are various possible directions for future research based on the results obtained in this paper. While the focus of the current work was on obtaining exact formulas for the solutions of nonlinear equations with quadratic mappings and quadratic programming problems, it would be interesting to generalize the proposed approaches to other classes of problems, including systems of equations with both linear and quadratic mappings. Another direction would be an extension of the presented methods to polynomial equations and polynomial programming problems. Future research could also focus on numerical studies and the implementation of the methods described in the paper.

Author Contributions

Conceptualization, A.A.T.; methodology, O.B., A.P. and A.A.T.; validation, O.B., A.P. and A.A.T.; formal analysis, O.B., A.P. and A.A.T.; investigation, O.B., A.P. and A.A.T.; resources, O.B. and A.A.T.; writing—original draft preparation, O.B. and A.P.; supervision, O.B. and A.A.T.; project administration, A.P.; funding acquisition, A.P. and A.A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Education and Science, grant number 144/23/B.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors thank the anonymous reviewers for their careful reading of our manuscript and for their insightful comments and suggestions that helped us improve the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; or in the writing of the manuscript.
