*3.2. The MDSVC Algorithm*

Owing to the simple box constraint and the convex quadratic objective of our optimization problem, we adopt the dual coordinate descent (DCD) algorithm, which repeatedly minimizes over a single variable while keeping all the others fixed, so that each sub-problem admits a closed-form solution. For our problem, we adjust the value of *β<sub>i</sub>* by a step size *t* so that *f*(*β*) is minimized, while keeping the other components *β<sub>k</sub>*, *k* ≠ *i*, unchanged. Our sub-problem is thus as follows

$$\begin{cases} \min\_{t} f(\boldsymbol{\beta} + t\boldsymbol{e}\_{i}) \\ \quad 0 \le \beta\_{i} + t \le C \end{cases} \tag{20}$$

where *e<sub>i</sub>* = (0, . . . , 1, . . . , 0)<sup>T</sup> ∈ ℝ<sup>*m*</sup> denotes the vector with 1 in the *i*-th element and 0 elsewhere. For the function *f*, we have

$$f(\boldsymbol{\beta} + t\boldsymbol{e}\_i) = \frac{1}{2}d\_{ii}t^2 + \nabla f(\boldsymbol{\beta})\_{i}t + f(\boldsymbol{\beta})\tag{21}$$

where *d<sub>ii</sub>* = *e<sub>i</sub>*<sup>T</sup>*De<sub>i</sub>* is the *i*-th diagonal entry of *D*. Then we calculate the gradient in the following form
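As a sanity check, the expansion in Equation (21) can be verified numerically for a generic convex quadratic *f*(*β*) = ½*β*<sup>T</sup>*Dβ* + *p*<sup>T</sup>*β*. The matrices below are random stand-ins, not the paper's actual *D*; this only illustrates that the second-order Taylor expansion along one coordinate is exact for a quadratic.

```python
import numpy as np

# Numerical check of Equation (21) for a generic convex quadratic
# f(beta) = 0.5 * beta^T D beta + p^T beta.
# D and p are random placeholders, not the paper's actual matrices.
rng = np.random.default_rng(0)
m = 5
A = rng.standard_normal((m, m))
D = A @ A.T                      # symmetric positive semi-definite
p = rng.standard_normal(m)

def f(beta):
    return 0.5 * beta @ D @ beta + p @ beta

beta = rng.standard_normal(m)
i, t = 2, 0.7
e_i = np.zeros(m)
e_i[i] = 1.0

grad_i = D[i] @ beta + p[i]      # [∇f(beta)]_i for this quadratic
expansion = 0.5 * D[i, i] * t**2 + grad_i * t + f(beta)

print(np.isclose(f(beta + t * e_i), expansion))  # True
```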

$$\nabla f(\boldsymbol{\beta})\_i = \boldsymbol{e}\_i^T D \boldsymbol{\beta} + \boldsymbol{e}\_i^T \boldsymbol{F}^T \tag{22}$$

As *f*(*β*) is independent of *t*, Equation (21) can be viewed as a simple quadratic function of *t*. We therefore obtain the minimum of Equation (21) by setting the derivative of this function with respect to *t* to zero. Hence, *t* is represented as follows

$$t = -\frac{\nabla f(\boldsymbol{\beta})\_i}{d\_{ii}} \tag{23}$$

We denote *β<sup>i</sup> ite<sup>r</sup>* as the value of *<sup>β</sup><sup>i</sup>* at the *<sup>i</sup>*-th iteration, thus, the value of *<sup>β</sup><sup>i</sup> iter*+<sup>1</sup> can be obtained as

$$\beta\_i^{iter+1} = \beta\_i^{iter} - \frac{\nabla f(\boldsymbol{\beta}^{iter})\_i}{d\_{ii}} \tag{24}$$

Considering the box constraint 0 ≤ *β<sub>i</sub>* ≤ *C* of the problem, we further obtain the final update rule for *β<sub>i</sub>*

$$\beta\_i \leftarrow \min\left(\max\left(\beta\_i - \frac{\nabla f(\boldsymbol{\beta})\_i}{d\_{ii}}, 0\right), C\right) \tag{25}$$
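The projected Newton step of Equation (25) can be sketched in a few lines. The function name and arguments below are illustrative; `grad_i` stands for [∇*f*(*β*)]<sub>*i*</sub>, and *d<sub>ii</sub>* is assumed positive.

```python
# Sketch of the clipped coordinate update in Equation (25):
# a one-coordinate Newton step projected onto the box [0, C].
def update_coordinate(beta_i, grad_i, d_ii, C):
    """Move beta_i by -grad_i / d_ii, then clip to [0, C]."""
    return min(max(beta_i - grad_i / d_ii, 0.0), C)

print(update_coordinate(0.5, 2.0, 4.0, 1.0))   # 0.0 (clipped at the lower bound)
print(update_coordinate(0.5, -8.0, 4.0, 1.0))  # 1.0 (clipped at the upper bound)
```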

According to Equations (16) and (19), we have [∇*f*(*β*)]<sub>*i*</sub> = 2*e<sub>i</sub>*<sup>T</sup>*Qα*. Algorithm 1 (MDSVC) describes the procedure of MDSVC with the Gaussian kernel.


**Algorithm 1.** MDSVC

Step 1. **Input**: data set *X*; parameters [*λ*<sub>1</sub>, *λ*<sub>2</sub>, *C*, *q*]; maxIter; *m*
Step 2. **Initialization**: *β* = (*λ*<sub>1</sub>/*m*)*e*, *α* = (2*λ*<sub>1</sub>/*m*)*Ge*, *d<sub>ii</sub>* = 2*e<sub>i</sub>*<sup>T</sup>*QGe<sub>i</sub>*, *G* = ((*λ*<sub>1</sub> + 1)*Q* + *H* + *P*)<sup>−1</sup>*Q*
Step 3. Iteration (1~maxIter): iteration stops when *β* converges.
  Step 3.1. Randomly disturb *β* and then get the random index *i*
  Step 3.2. Loop (*i* = 1, 2, . . . , *m*): update the gradient and update *β*, *α* alternately:
    [∇*f*(*β*)]<sub>*i*</sub> ← 2*e<sub>i</sub>*<sup>T</sup>*Qα*
    *β<sub>i</sub>*<sup>temp</sup> ← *β<sub>i</sub>*
    *β<sub>i</sub>* ← min(max(*β<sub>i</sub>* − [∇*f*(*β*)]<sub>*i*</sub>/*d<sub>ii</sub>*, 0), *C*)
    *α* ← *α* + (*β<sub>i</sub>* − *β<sub>i</sub>*<sup>temp</sup>)*Ge<sub>i</sub>*
Step 4. **Output**: *α*, *β*
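The iteration of Algorithm 1 can be sketched as follows. This is a minimal sketch, assuming *Q* and *G* have been precomputed as in Step 2 and that the diagonal entries *d<sub>ii</sub>* are positive; the function name, the `tol` convergence test, and the stand-in matrices in the usage below are our own additions, not the paper's.

```python
import numpy as np

# Hedged sketch of the DCD loop in Algorithm 1 (MDSVC).
# Q, G, beta, alpha are assumed precomputed as in Step 2.
def mdsvc_dcd(Q, G, beta, alpha, C, max_iter=1000, tol=1e-6):
    m = len(beta)
    d = 2.0 * np.einsum('ij,ji->i', Q, G)        # d_ii = 2 e_i^T Q G e_i
    for _ in range(max_iter):
        beta_old = beta.copy()
        for i in np.random.permutation(m):        # Step 3.1: random order
            grad_i = 2.0 * Q[i] @ alpha           # [∇f(beta)]_i = 2 e_i^T Q alpha
            beta_tmp = beta[i]
            beta[i] = min(max(beta[i] - grad_i / d[i], 0.0), C)   # Eq. (25)
            alpha += (beta[i] - beta_tmp) * G[:, i]  # α ← α + (β_i − β_i^temp) G e_i
        if np.linalg.norm(beta - beta_old) < tol:    # Step 3: β has converged
            break
    return alpha, beta
```

The *α* update avoids recomputing *Gβ* from scratch after every single-coordinate change, which is what keeps each inner step at O(*m*) cost.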

We now analyze the computational complexity of Algorithm MDSVC, where *m* denotes the number of examples and *n* the number of features. We set maxIter to 1000 in our experiments; the time complexity of the DCD iterations is thus O(maxIter · *m*<sup>2</sup>). Furthermore, the overall time complexity of DCD in this paper is the sum of the terms shown in Table 1. Since *m* is much greater than *n*, the time complexity of DCD is *O*(*m*<sup>3</sup>), and the space complexity of DCD is *O*(*m*<sup>2</sup>).

**Table 1.** Time complexity of the formulas involved.

