*4.1. Evaluation Criteria*

To evaluate the performance of MDSVC, we use two external indicators, clustering accuracy (Acc) and the Adjusted Rand Index (ARI), as our performance metrics. Table 3 lists the definitions of these metrics.

**Table 3.** Formulas of the metrics.

| Metric | Formula |
| --- | --- |
| Acc | Acc = (&Sigma;<sub>i=1</sub><sup>r</sup> *c<sub>i</sub>*)/*m* |
| ARI | ARI = (RI &minus; E[RI])/(max(RI) &minus; E[RI]), where RI = (TP + TN)/*C*<sub>*m*</sub><sup>2</sup> |


Accuracy: let *m* be the total number of samples and *c<sub>i</sub>* the number of correctly classified points in the *i*-th cluster. We obtain *r* clusters by running the clustering method and then measure accuracy against the true labels as Acc = (&Sigma;<sub>i=1</sub><sup>r</sup> *c<sub>i</sub>*)/*m*.

Adjusted Rand Index: [y1, y2, ..., ys] denotes the true labels of the dataset, while [c1, c2, ..., cr] denotes the clusters produced by MDSVC. The sum of TP and TN measures the agreement between the clustering result and the original cluster labels, and it can be computed directly from the confusion matrix. The Rand index (RI), defined as RI = (TP + TN)/*C*<sub>*m*</sub><sup>2</sup>, where *C*<sub>*m*</sub><sup>2</sup> is the total number of instance pairs, gives the frequency of agreements over all pairs. However, the RI of two random label assignments is not a constant close to zero. The ARI addresses this issue by discounting the expected RI of a random partition.
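As a minimal sketch of how these two metrics can be computed in practice (the function name `clustering_accuracy` and the toy labels are our own illustration, not part of MDSVC), clustering accuracy requires matching arbitrary cluster IDs to true labels before counting correct points, which is commonly done with the Hungarian algorithm; ARI is available directly in scikit-learn:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Acc = (sum_i c_i) / m, where c_i counts correctly classified
    points in cluster i after matching clusters to true labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    m = y_true.size
    clusters = np.unique(y_pred)
    classes = np.unique(y_true)
    # Contingency matrix: rows = predicted clusters, cols = true classes.
    w = np.zeros((clusters.size, classes.size), dtype=int)
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            w[i, j] = np.sum((y_pred == c) & (y_true == k))
    # Hungarian matching maximizes the total number of correct points.
    row, col = linear_sum_assignment(-w)
    return w[row, col].sum() / m

# Toy example: cluster IDs are arbitrary, so 1/0 swaps do not hurt Acc.
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [1, 1, 0, 0, 0, 0]
acc = clustering_accuracy(y_true, y_pred)   # 5/6 points agree after matching
ari = adjusted_rand_score(y_true, y_pred)   # pair-counting agreement, chance-corrected
```

Note that raw label-matching accuracy would be meaningless here without the assignment step, since the clustering method has no access to the true label IDs.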
