**Proposition 7.** *Let f*(*x*) *be a convex function on* [*a*, *b*]*. Then*

$$
\underline{f}(x) \le f(x) \le \overline{f}(x),
\text{ for all } x \in [a, b], \tag{5}
$$

*where*

$$\begin{aligned} \underline{f}(x) &= \max \{ f(a) + f'(a)(x - a), f(b) + f'(b)(x - b) \}, \\ \overline{f}(x) &= f(a) + (f(b) - f(a)) \frac{x - a}{b - a}. \end{aligned} \tag{6}$$

**Proof.** First, we prove that $\underline{f}(x)$ is an underestimator for $f(x)$. For a function *f*(*x*) convex on an interval [*a*, *b*] and a point *t* ∈ [*a*, *b*], the following inequality holds [43]:

$$f(x) \ge f(t) + f'(t)(x - t), \text{ for all } x \in [a, b]. \tag{7}$$

Substituting *t* with *a* and *b* in (7), we get the following system of valid inequalities:

$$\begin{aligned} f(x) &\geq f(a) + f'(a)(x - a), \text{ for all } x \in [a, b],\\ f(x) &\geq f(b) + f'(b)(x - b), \text{ for all } x \in [a, b]. \end{aligned} \tag{8}$$

From (8), it directly follows that

$$f(x) \ge \max\left(f(a) + f'(a)(x - a), f(b) + f'(b)(x - b)\right), \text{ for all } x \in [a, b].$$

The right-hand side is $\underline{f}(x)$ from (6).

Now we prove that $\overline{f}(x)$ is an overestimator for $f(x)$. Taking *x*<sub>1</sub> = *a*, *x*<sub>2</sub> = *b*, *λ* = (*b* − *x*)/(*b* − *a*) in the definition of a convex function (3), we obtain:

$$\begin{aligned} f(x) &\le \frac{b-x}{b-a} f(a) + \left(1 - \frac{b-x}{b-a}\right) f(b) = \left(1 - \frac{x - a}{b-a}\right) f(a) + \frac{x - a}{b-a} f(b) \\ &= f(a) + (f(b) - f(a)) \frac{x - a}{b-a} .\end{aligned}$$

The rightmost expression is $\overline{f}(x)$ from (6). This completes the proof.

Proposition 7 is illustrated in Figure 1. The figure shows the original function *f*(*x*) (blue curve), its overestimator, consisting of one green line segment, and the underestimator, consisting of two connected line segments *AC* and *CB* marked in red. The estimators are constructed following (6).

**Figure 1.** The overestimator (green) and the underestimator (red) of a convex function *f*(*x*) on an interval [*a*, *b*].
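For illustration, the estimators in (6) can be evaluated directly. The sketch below is ours (the function names `underestimator` and `overestimator` are hypothetical, not from the paper); it checks the sandwich inequality (5) on a grid for the convex function *f*(*x*) = *x*² on [0, 2]:

```python
def underestimator(f, df, a, b, x):
    """Max of the two endpoint tangents; valid when f is convex on [a, b]."""
    return max(f(a) + df(a) * (x - a), f(b) + df(b) * (x - b))

def overestimator(f, a, b, x):
    """Chord (secant line) through (a, f(a)) and (b, f(b))."""
    return f(a) + (f(b) - f(a)) * (x - a) / (b - a)

# Check (5) on a grid for f(x) = x**2, which is convex on [0, 2].
f = lambda x: x * x
df = lambda x: 2.0 * x
for i in range(21):
    x = 2.0 * i / 20
    assert underestimator(f, df, 0.0, 2.0, x) <= f(x) <= overestimator(f, 0.0, 2.0, x)
```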

A similar proposition holds for concave functions.

**Proposition 8.** *Let f*(*x*) *be a concave function on* [*a*, *b*]*. Then*

$$
\underline{f}(x) \le f(x) \le \overline{f}(x),
\text{ for all } x \in [a, b], \tag{9}
$$

*where*

$$\begin{aligned} \underline{f}(x) &= f(a) + (f(b) - f(a)) \frac{x - a}{b - a}, \\ \overline{f}(x) &= \min \{ f(a) + f'(a)(x - a), f(b) + f'(b)(x - b) \}. \end{aligned} \tag{10}$$

**Proof.** This statement is a straightforward corollary of Proposition 7. Indeed, if *f*(*x*) is concave then −*f*(*x*) is convex and one can apply Formula (6) to obtain its estimators. After changing the sign and reversing the inequalities we get (10).

Fortunately, the minimum and maximum of the estimators $\underline{f}(x)$ and $\overline{f}(x)$ can be found analytically, as stated by the following propositions.

**Proposition 9.** *If a function f is convex on an interval* [*a*, *b*] *then*

$$\max_{x\in[a,b]} f(x) = \max(f(a), f(b)), \tag{11}$$

$$\min_{x\in[a,b]} f(x) = f(a), \text{ if } f'(a) \ge 0,\tag{12}$$

$$\min_{x\in[a,b]} f(x) = f(b), \text{ if } f'(b) \le 0,\tag{13}$$

$$\min_{x \in [a,b]} f(x) \ge \frac{f'(b)f(a) - f'(a)f(b)}{f'(b) - f'(a)} + f'(a)f'(b)\frac{b-a}{f'(b) - f'(a)},\tag{14}$$

*otherwise.*

**Proof.** Equation (11) is obviously valid, since a convex function attains its maximum on an interval at one of the endpoints. Denote *α* = *f*′(*a*), *β* = *f*′(*b*). Equation (12) follows from the fact that the function *f*(*x*) lies above its tangent *f*(*a*) + *α*(*x* − *a*), coincides with it at *x* = *a*, and the tangent is a monotonically nondecreasing function. Equation (13) is proved in the same way.

For the remaining case *α* < 0 < *β*, the minimum of the underestimator is achieved at the intersection point of the lines defined by *f*(*a*) + *α*(*x* − *a*) and *f*(*b*) + *β*(*x* − *b*) (point *C* in Figure 1). This point is the solution of the following equation:

$$f(a) + \alpha(x - a) = f(b) + \beta(x - b).$$

Simple transformations yield:

$$f(a) - f(b) + \beta b - \alpha a = x(\beta - \alpha).$$

Since *α* ≠ *β*, the minimum of the underestimator is achieved at the point

$$x = \frac{f(a) - f(b) + \beta b - \alpha a}{\beta - \alpha}.$$

Substituting this value into *f*(*a*) + *α*(*x* − *a*), we obtain:

$$\min_{x \in [a,b]} \underline{f}(x) = \frac{\beta f(a) - \alpha f(b)}{\beta - \alpha} + \alpha \beta \frac{b - a}{\beta - \alpha}.$$

This concludes the proof.
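The case analysis of Proposition 9 translates directly into code. Below is a minimal sketch (ours; `convex_range` is a hypothetical name, not the paper's implementation), assuming *f* is convex on [*a*, *b*] and differentiable at the endpoints:

```python
def convex_range(f, df, a, b):
    """Enclosure [lo, hi] of the range of a convex f on [a, b], per (11)-(14)."""
    hi = max(f(a), f(b))                          # (11): max at an endpoint
    alpha, beta = df(a), df(b)
    if alpha >= 0:                                # (12): f nondecreasing on [a, b]
        lo = f(a)
    elif beta <= 0:                               # (13): f nonincreasing on [a, b]
        lo = f(b)
    else:                                         # (14): alpha < 0 < beta
        lo = (beta * f(a) - alpha * f(b)) / (beta - alpha) \
            + alpha * beta * (b - a) / (beta - alpha)
    return lo, hi

# f(x) = x**2 on [-1, 2]: the endpoint tangents intersect at C = (0.5, -2),
# so the lower bound is -2 (valid for the true minimum 0), the upper is exact.
lo, hi = convex_range(lambda x: x * x, lambda x: 2.0 * x, -1.0, 2.0)
```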

The following proposition, justified in the same way, gives bounds for a concave function.

**Proposition 10.** *If a function f is concave over an interval* [*a*, *b*] *then*

$$\min_{x\in[a,b]} f(x) = \min(f(a), f(b)), \tag{15}$$

$$\max_{x\in[a,b]} f(x) = f(a), \text{ if } f'(a) \le 0,\tag{16}$$

$$\max_{x \in [a,b]} f(x) = f(b), \text{ if } f'(b) \ge 0,\tag{17}$$

$$\max_{x \in [a,b]} f(x) \le \frac{f'(b)f(a) - f'(a)f(b)}{f'(b) - f'(a)} + f'(a)f'(b)\frac{b-a}{f'(b) - f'(a)},\tag{18}$$

*otherwise.*

**Proof.** This statement is a straightforward corollary of Proposition 9. Indeed, if *f*(*x*) is concave then −*f*(*x*) is convex and one can apply Formulas (11)–(14) to obtain its estimators. After changing the sign and reversing the inequalities, we get Formulas (15)–(18).

The bounds computed with the help of Propositions 9 and 10 are often tighter than bounds obtained by other means. Below we compare the ranges computed according to Propositions 9 and 10 with the results of interval analysis techniques.

#### **4. Numerical Experiments**

In this section, we experimentally evaluate the proposed approach. First, in Section 4.1, the interval bounds and the bounds computed with the proposed techniques are compared on a set of functions. In Section 4.2, we study how accounting for monotonicity and convexity properties affects the performance of global optimization algorithms.

#### *4.1. Comparison with Interval Bounds*

We selected two well-known [26,28] interval analysis techniques for computing the range of a function. The first is the *natural interval expansion*, which computes the interval bounds of a function's range by applying interval arithmetic rules according to the function's expression. The second is the so-called first-order Taylor expansion:

$$\mathcal{R}_f([a,b]) \subseteq f(c) + [a-c, b-c] \cdot \mathbf{f}'([a,b]),\tag{19}$$

where *c* = (*a* + *b*)/2 and **f**′([*a*, *b*]) denotes the natural interval expansion of the derivative of *f*(*x*). A detailed proof of (19) can be found in [26].
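For illustration, (19) can be sketched as follows (our code, not from [26]); we assume the natural interval extension of *f*′ on [*a*, *b*] is supplied as a pair `(dlo, dhi)`. Since the interval [*a* − *c*, *b* − *c*] is symmetric around zero, the interval product reduces to a magnitude bound:

```python
def taylor_form(f, dlo, dhi, a, b):
    """First-order Taylor enclosure (19): f(c) + [a-c, b-c] * f'([a, b])."""
    c = (a + b) / 2.0
    r = (b - a) / 2.0                    # [a - c, b - c] = [-r, r]
    m = max(abs(dlo), abs(dhi))          # |f'([a, b])| <= m
    return f(c) - r * m, f(c) + r * m    # [-r, r] * [dlo, dhi] = [-r*m, r*m]

# f(x) = x**2 on [0, 1]; the natural extension of f'(x) = 2x is [0, 2].
lo, hi = taylor_form(lambda x: x * x, 0.0, 2.0, 0.0, 1.0)
# (lo, hi) = (-0.75, 1.25), which encloses the true range [0, 1]
```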

**Example 4.** *Let f*(*x*) = −cos(*x*) + *e*<sup>−*x*</sup> *and a* = 0, *b* = 1*. The convexity of f*(*x*) *can easily be established by applying the evaluation rules introduced in the previous section:*


*Applying Proposition 9, we get the following enclosing interval for f*(*x*) *on* [0, 1]*:*

$$f([0,1]) \subseteq [-0.438, 0],$$

*with the width* 0.438*. The natural interval expansion gives:*

$$f([0,1]) \subseteq [-0.632, 0.46],$$

*with the width* 1.092 *and the first order Taylor expansion produces*

$$f([0,1]) \subseteq [-0.77, 0.23]$$

*with the width* 1.0*. Thus, the interval computed with the proposed techniques is roughly* 2.3 *to* 2.5 *times narrower than those produced by the Taylor and natural interval expansions.*
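The numbers in Example 4 can be reproduced directly from (11) and (14). The short check below is ours; it uses only the endpoint values and derivatives of *f*(*x*) = −cos(*x*) + *e*<sup>−*x*</sup>:

```python
import math

f = lambda x: -math.cos(x) + math.exp(-x)
fp = lambda x: math.sin(x) - math.exp(-x)    # derivative of f

a, b = 0.0, 1.0
alpha, beta = fp(a), fp(b)                   # alpha = -1 < 0 < beta ~ 0.474
lo = (beta * f(a) - alpha * f(b)) / (beta - alpha) \
    + alpha * beta * (b - a) / (beta - alpha)    # lower bound (14)
hi = max(f(a), f(b))                         # exact maximum (11)
# (lo, hi) ~ (-0.438, 0.0), matching the enclosure [-0.438, 0] above
```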

It is worth noting that the bounds provided by Propositions 9 and 10 can also be computed for functions that are not differentiable at a set of interior points. It suffices that the function has derivatives at the endpoints of the interval. The latter can be computed using *forward-mode automatic differentiation* [28], which is merely the application of differentiation rules at a point.

Table 4 compares bounds computed with the interval analysis techniques and the bounds computed by the proposed method for five convex functions. The convexity of these functions can be easily deduced by the introduced convexity evaluation rules. For an interval [*a*, *b*], three bounds are presented in the respective columns:

**Natural**—a bound computed by the natural interval expansion technique,

**Taylor**—a bound computed by the first-order Taylor expansion,

**Convex**—a bound computed according to Propositions 9 and 10.

**Table 4.** Comparison of the natural interval expansion (Natural), the Taylor expansion (Taylor), and the bounds produced by the proposed techniques (Convex).


For all functions except No. 4, the bound produced by the proposed techniques is contained in both intervals produced by the interval techniques and is significantly tighter. For function No. 4, the interval computed by the Convex method is narrower than that of the natural interval expansion but does not enclose it. Since neither of these intervals contains the other, they can be intersected to obtain a better enclosing interval [1.65, 90.02]. The fifth function is non-differentiable at *x* = 0. Thus, symbolic differentiation does not give a meaningful result, and the Taylor expansion cannot be applied in this case. For that reason, the respective cell is marked with "−".

#### *4.2. Impact on the Performance of Global Search*

In Section 4.1, we observed that accounting for convexity can significantly improve the interval bounds. As expected, applying these bounds reduces the number of steps of the global search algorithm.

We implemented a standard branch-and-bound algorithm that uses the lower-bound test to discard subintervals from the further search. The description of this algorithm can be found elsewhere [26,41]. For completeness, we outline it here (Figure 2).

**Figure 2.** The standard branch-and-bound algorithm.

The algorithm operates on a list *L* of intervals, initialized with the feasible set [*a*, *b*] (line 04). The record point (incumbent solution) is initialized with the center of the interval [*a*, *b*] (line 05). The main loop (lines 06–16) iterates while the list *L* is not empty. At each iteration, one of the intervals in the list is taken (line 07) and examined. First, the value at the middle of this interval is computed, and if necessary, the record is updated (lines 08–11). The interval extension **y** = **f**(**x**) is computed at line 12. If the lower bound of **y** lies above *f*(*x<sub>r</sub>*) − *ε*, the interval is discarded from the further search. Otherwise, it is partitioned into two smaller intervals, which are added to the list *L* (line 14).
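The loop of Figure 2 can be sketched as follows (a minimal hypothetical implementation, not the paper's code); `extension(lo, hi)` stands in for any of the range-bounding techniques and must return an enclosure of the range of *f* on [lo, hi]:

```python
def branch_and_bound(f, extension, a, b, eps=1e-6):
    L = [(a, b)]                      # line 04: list of subintervals
    xr = (a + b) / 2.0                # line 05: record point (incumbent)
    fr = f(xr)
    while L:                          # lines 06-16: main loop
        lo, hi = L.pop()              # line 07: take an interval
        c = (lo + hi) / 2.0
        if f(c) < fr:                 # lines 08-11: update the record
            xr, fr = c, f(c)
        ylo, yhi = extension(lo, hi)  # line 12: interval extension
        if ylo <= fr - eps:           # keep only potentially improving parts
            L.append((lo, c))         # line 14: bisect and store both halves
            L.append((c, hi))
    return xr, fr

# Example: f(x) = (x - 1)**2 on [0, 4], with an exact range extension of f.
def ext(lo, hi):
    u, v = lo - 1.0, hi - 1.0
    m = 0.0 if u <= 0.0 <= v else min(u * u, v * v)
    return m, max(u * u, v * v)

xr, fr = branch_and_bound(lambda x: (x - 1.0) ** 2, ext, 0.0, 4.0)
# converges to the minimizer xr = 1.0 with fr = 0.0
```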

We consider three variants of computing the interval extension:


The described methods can be applied in combination, when the intervals computed by several methods are intersected to obtain the resulting range. We considered four different combinations of the range bounding techniques to compute the enclosing interval of the objective function:


The convexity and the monotonicity are detected by analyzing the ranges of the first/second derivatives in differentiable cases or by using the introduced evaluation rules for the non-differentiable expressions.

Table 5 lists the set of test problems used in the experiments. For each problem, the objective function (*f*(*x*)), the interval ([*a*, *b*]), and the global optimum value (*f*(*x*∗)) are presented. The first ten problems are taken from [29]. The objective functions in these problems have both first and second derivatives.

To demonstrate the applicability of the proposed automatic convexity deduction techniques, we also have added four nondifferentiable problems. Test cases 11 and 12 were proposed by us, and 13 and 14 were taken from [33].

**Table 5.** Test problems.


The results of numerical experiments are summarized in Table 6. The cells contain the number of steps performed by the branch-and-bound method. Columns correspond to different ways for computing the range of the objective functions, and rows correspond to test problems. The Taylor expansion cannot be applied to nondifferentiable problems 11–14, and the respective cells are blank.


**Table 6.** Testing results.

Experimental results demonstrate that, for the majority of the test problems, the proposed techniques dramatically improve the performance of the standard branch-and-bound algorithm that uses the natural interval expansion. The combination of the natural interval expansion and the proposed method always outperforms the combination of the natural and the first-order Taylor interval expansions. The comparison of the last two columns of Table 6 indicates that the Taylor-expansion version of branch-and-bound can be further improved when combined with the proposed techniques. However, for problems 1 and 5–10, the proposed method does not benefit from the Taylor expansion.
